id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
485,427 | https://en.wikipedia.org/wiki/Advanced%20Boolean%20Expression%20Language | The Advanced Boolean Expression Language (ABEL) is an obsolete hardware description language (HDL) and an associated set of design tools for programming programmable logic devices (PLDs). It was created in 1983 by Data I/O Corporation, in Redmond, Washington.
ABEL includes both concurrent equation and truth table logic formats as well as a sequential state machine description format. A preprocessor with syntax loosely based on Digital Equipment Corporation's MACRO-11 assembly language is also included.
In addition to being used for describing digital logic, ABEL may also be used to describe test vectors (patterns of inputs and expected outputs) that may be downloaded to a hardware PLD programmer along with the compiled and fuse-mapped PLD programming data.
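To make the idea of test vectors concrete, the following sketch (ordinary Python rather than ABEL, and not tied to any particular device or programmer file format) enumerates the input patterns and expected outputs for a simple combinational function of the kind a PLD might implement; the half-adder and its signal names are hypothetical examples.

```python
from itertools import product

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Hypothetical logic to be programmed into a PLD: sum and carry of two input bits."""
    return a ^ b, a & b

# Build test vectors: each entry pairs an input pattern with its expected outputs,
# analogous to the vectors a programmer would apply to verify the programmed device.
test_vectors = []
for a, b in product((0, 1), repeat=2):
    s, c = half_adder(a, b)
    test_vectors.append({"inputs": {"a": a, "b": b}, "expected": {"sum": s, "carry": c}})

for vector in test_vectors:
    print(vector)
```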
Other PLD design languages originating in the same era include CUPL and PALASM. Since the advent of larger field-programmable gate arrays (FPGAs), PLD-specific HDLs have fallen out of favor as standard HDLs such as Verilog and VHDL gained adoption.
The ABEL concept and original compiler were created by Russell de Pina of Data I/O's Applied Research Group in 1981. The work was continued by the ABEL product development team (led by Dr. Kyu Y. Lee), which included Mary Bailey, Bjorn Benson, Walter Bright, Michael Holley, Charles Olivier, and David Pellerin.
After a series of acquisitions, the ABEL toolchain and intellectual property were bought by Xilinx. Xilinx discontinued support for ABEL in its ISE Design Suite starting with version 11 (released in 2010).
References
External links
University of Pennsylvania's ABEL primer, as recommended by Walter Bright (dead link)
University of Southern Maine ABEL-HDL Primer, by J. Van der Spiegel
Prentice Hall Publishers Digital Design Using ABEL, 1994, by David Pellerin and Michael Holley
Prentice Hall Publishers Practical Design Using Programmable Logic, 1991, by David Pellerin and Michael Holley
Hardware description languages | Advanced Boolean Expression Language | [
"Engineering"
] | 409 | [
"Electronic engineering",
"Hardware description languages"
] |
485,457 | https://en.wikipedia.org/wiki/Periodogram | In signal processing, a periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898. Today, the periodogram is a component of more sophisticated methods (see spectral estimation). It is the most common tool for examining the amplitude vs frequency characteristics of FIR filters and window functions. FFT spectrum analyzers are also implemented as a time-sequence of periodograms.
Definition
There are at least two different definitions in use today. One of them involves time-averaging, and one does not. Time-averaging is the subject of other articles (Bartlett's method and Welch's method); this article is not about time-averaging. The definition of interest here is that the power spectral density of a continuous function is the Fourier transform of its auto-correlation function (see Cross-correlation theorem, Spectral density, and Wiener–Khinchin theorem):
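Written out in standard notation (the symbols x, r and S below are introduced only for illustration and are not used elsewhere in this article), that definition reads, for a deterministic finite-energy signal x(t),

$$ S_{xx}(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, e^{-i 2\pi f \tau}\, d\tau, \qquad r_{xx}(\tau) = \int_{-\infty}^{\infty} x(t)\, x^{*}(t-\tau)\, dt, $$

which is equivalent to $S_{xx}(f) = |X(f)|^{2}$, the squared magnitude of the Fourier transform of x(t).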
Computation
For sufficiently small values of the sample interval T, an arbitrarily accurate approximation of the Fourier transform X(f) of the underlying function x(t) can be observed in the region |f| < 1/(2T) of the periodic function:

X_1/T(f) = Σk X(f − k/T),

which is precisely determined by the samples x(nT) that span the non-zero duration of x(t) (see Discrete-time Fourier transform).

And for sufficiently large values of parameter N, X_1/T(f) can be evaluated at an arbitrarily close frequency spacing 1/(NT) by a summation of the form:

X_1/T(k/(NT)) = Σn x(nT) e^(−i2πkn/N),

where k is an integer. The periodicity of e^(−i2πkn/N) in n allows this to be written very simply in terms of a Discrete Fourier transform:

X_1/T(k/(NT)) = Σ(n = 0 .. N−1) x_N(nT) e^(−i2πkn/N),

where x_N is a periodic summation:

x_N(nT) = Σm x((n − mN)T).

When evaluated for all integers k between 0 and N−1, the array:

S(k/(NT)) = |X_1/T(k/(NT))|²

is a periodogram.
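A small numerical sketch of this procedure, assuming NumPy; the sample rate, window and test signal are arbitrary choices for illustration.

```python
import numpy as np

T = 1.0 / 100.0          # sample interval (assumed 100 Hz sample rate)
n = np.arange(64)        # 64 samples span the non-zero duration of the sequence
x = np.hanning(64) * np.cos(2 * np.pi * 12.5 * n * T)   # example windowed sinusoid

N = 512                  # DFT length; N > 64, so the sequence is zero-padded
X = np.fft.fft(x, N)     # samples of the DTFT at frequencies k/(N*T)
periodogram = np.abs(X) ** 2

freqs = np.fft.fftfreq(N, d=T)                  # frequency of each bin, in Hz
peak = freqs[np.argmax(periodogram[: N // 2])]
print(f"peak near {peak:.1f} Hz")               # expected near 12.5 Hz
```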
Applications
When a periodogram is used to examine the detailed characteristics of an FIR filter or window function, the parameter N is chosen to be several multiples of the non-zero duration of the x[n] sequence, which is called zero-padding. When it is used to implement a filter bank, N is several sub-multiples of the non-zero duration of the x[n] sequence.
One of the periodogram's deficiencies is that the variance at a given frequency does not decrease as the number of samples used in the computation increases. It does not provide the averaging needed to analyze noiselike signals or even sinusoids at low signal-to-noise ratios. Window functions and filter impulse responses are noiseless, but many other signals require more sophisticated methods of spectral estimation. Two of the alternatives use periodograms as part of the process:
The method of averaged periodograms, more commonly known as Welch's method, divides a long x[n] sequence into multiple shorter, and possibly overlapping, subsequences. It computes a windowed periodogram of each one and averages them element-wise, producing an array where each element is the average of the corresponding elements of all the periodograms (see the code sketch following these alternatives). For stationary processes, this reduces the noise variance of each element by approximately a factor equal to the reciprocal of the number of periodograms.
Smoothing is an averaging technique in frequency, instead of time. The smoothed periodogram is sometimes referred to as a spectral plot.
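As a concrete illustration of the averaging idea, here is a sketch using SciPy's implementation of Welch's method; the sample rate, segment length and noisy test signal are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1000.0                                   # assumed sample rate, Hz
t = np.arange(20_000) / fs
x = np.sin(2 * np.pi * 50 * t) + rng.normal(scale=2.0, size=t.size)  # noisy sinusoid

# Average windowed periodograms of overlapping 1024-sample segments.
f, psd = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)
print(f"PSD peak near {f[np.argmax(psd)]:.1f} Hz")   # expected near 50 Hz
```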
Periodogram-based techniques introduce small biases that are unacceptable in some applications. Other techniques that do not rely on periodograms are presented in the spectral density estimation article.
See also
Matched filter
Filtered backprojection (Radon transform)
Welch's method
Bartlett's method
Discrete-time Fourier transform
Least-squares spectral analysis, for computing periodograms in data that is not equally spaced
MUltiple SIgnal Classification (MUSIC), a popular parametric superresolution method
SAMV
Notes
References
Further reading
Frequency-domain analysis
Fourier analysis | Periodogram | [
"Physics"
] | 747 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)"
] |
485,472 | https://en.wikipedia.org/wiki/Earnshaw%27s%20theorem | Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. This was first proven by British mathematician Samuel Earnshaw in 1842.
It is usually cited in reference to magnetic fields, but it was first applied to electrostatic fields.
Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields). Earnshaw's theorem forbids magnetic levitation in many common situations.
If the materials are not hard, Werner Braunbeck's extension shows that materials with relative magnetic permeability greater than one (paramagnetism) are further destabilising, but materials with a permeability less than one (diamagnetic materials) permit stable configurations.
Explanation
In electrostatics
Informally, the case of a point charge in an arbitrary static electric field is a simple consequence of Gauss's law. For a particle to be in a stable equilibrium, small perturbations ("pushes") on the particle in any direction should not break the equilibrium; the particle should "fall back" to its previous position. This means that the force field lines around the particle's equilibrium position should all point inward, toward that position. If all of the surrounding field lines point toward the equilibrium point, then the divergence of the field at that point must be negative (i.e. that point acts as a sink). However, Gauss's law says that the divergence of any possible electric force field is zero in free space. In mathematical notation, an electrical force F = −∇U deriving from a potential U will always be divergenceless (U satisfies Laplace's equation) in free space:

∇·F = −∇²U = 0.
Therefore, there are no local minima or maxima of the field potential in free space, only saddle points. A stable equilibrium of the particle cannot exist and there must be an instability in some direction. This argument may not be sufficient if all the second derivatives of U are null.
To be completely rigorous, strictly speaking, the existence of a stable point does not require that all neighbouring force vectors point exactly toward the stable point; the force vectors could spiral in toward the stable point, for example. One method for dealing with this invokes the fact that, in addition to the divergence, the curl of any electric field in free space is also zero (in the absence of any magnetic currents).
In magnetostatics
It is also possible to prove this theorem directly from the force/energy equations for static magnetic dipoles (below). Intuitively, though, it is plausible that if the theorem holds for a single point charge then it would also hold for two opposite point charges connected together. In particular, it would hold in the limit where the distance between the charges is decreased to zero while maintaining the dipole moment – that is, it would hold for an electric dipole. But if the theorem holds for an electric dipole, then it will also hold for a magnetic dipole, since the (static) force/energy equations take the same form for both electric and magnetic dipoles.
As a practical consequence, this theorem also states that there is no possible static configuration of ferromagnets that can stably levitate an object against gravity, even when the magnetic forces are stronger than the gravitational forces.
Earnshaw's theorem has even been proven for the general case of extended bodies, and this is so even if they are flexible and conducting, provided they are not diamagnetic, as diamagnetism constitutes a (small) repulsive force, but no attraction.
There are, however, several exceptions to the rule's assumptions, which allow magnetic levitation.
In gravitostatics
Earnshaw's theorem applies to static gravitational fields.
Earnshaw's theorem applies in an inertial reference frame. But it is sometimes more natural to work in a rotating reference frame that contains a fictitious centrifugal force that violates the assumptions of Earnshaw's theorem. Points that are stationary in a rotating reference frame (but moving in an inertial frame) can be absolutely stable or absolutely unstable. For example, in the restricted three-body problem, the effective potential from the fictitious centrifugal force allows the Lagrange points L4 and L5 to lie at local maxima of the effective potential field even if there is only negligible mass at those locations. (Even though these Lagrange points lie at local maxima of the potential field rather than local minima, they are still absolutely stable in a certain parameter regime due to the fictitious velocity-dependent Coriolis force, which is not captured by the scalar potential field.)
Effect on physics
For quite some time, Earnshaw's theorem posed a startling question of why matter is stable and holds together, since much evidence was found that matter was held together electromagnetically despite the proven instability of static charge configurations. Since Earnshaw's theorem only applies to stationary charges, there were attempts to explain stability of atoms using planetary models, such as Nagaoka's Saturnian model (1904) and Rutherford's planetary model (1911), where the point electrons are circling a positive point charge in the center. Yet, the stability of such planetary models was immediately questioned: electrons have nonzero acceleration when moving along a circle, and hence they would radiate the energy via a non-stationary electromagnetic field. Bohr's model of 1913 formally prohibited this radiation without giving an explanation for its absence.
On the other hand, Earnshaw's theorem only applies to point charges, but not to distributed charges. This led J. J. Thomson in 1904 to his plum pudding model, where the negative point charges (electrons, or "plums") are embedded into a distributed positive charge "pudding", where they could be either stationary or moving along circles; this is a configuration which is non-point positive charges (and also non-stationary negative charges), not covered by Earnshaw's theorem. Eventually this led the way to Schrödinger's model of 1926, where the existence of non-radiative states in which the electron is not a point but rather a distributed charge density resolves the above conundrum at a fundamental level: not only there was no contradiction to Earnshaw's theorem, but also the resulting charge density and the current density are stationary, and so is the corresponding electromagnetic field, no longer radiating the energy to infinity. This gave a quantum mechanical explanation of the stability of the atom.
At a more practical level, it can be said that the Pauli exclusion principle and the existence of discrete electron orbitals are responsible for making bulk matter rigid.
Proofs for magnetic dipoles
Introduction
While a more general proof may be possible, three specific cases are considered here. The first case is a magnetic dipole of constant magnitude that has a fixed orientation. The second and third cases are magnetic dipoles where the orientation changes to remain aligned either parallel or antiparallel to the field lines of the external magnetic field. In paramagnetic and diamagnetic materials the dipoles are aligned parallel and antiparallel to the field lines, respectively.
Background
The proofs considered here are based on the following principles.
The energy U of a magnetic dipole with a magnetic dipole moment M in an external magnetic field B is given by

U = −M·B = −(MxBx + MyBy + MzBz).

The dipole will only be stably levitated at points where the energy has a minimum. The energy can only have a minimum at points where the Laplacian of the energy is greater than zero. That is, where

∇²U = ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² > 0.

Finally, because both the divergence and the curl of a magnetic field are zero (in the absence of current or a changing electric field), the Laplacians of the individual components of a magnetic field are zero. That is,

∇²Bx = ∇²By = ∇²Bz = 0.
This is proven at the very end of this article as it is central to understanding the overall proof.
Summary of proofs
For a magnetic dipole of fixed orientation (and constant magnitude) the energy will be given by

U = −M·B = −(MxBx + MyBy + MzBz),

where Mx, My and Mz are constant. In this case the Laplacian of the energy is always zero,

∇²U = −(Mx ∇²Bx + My ∇²By + Mz ∇²Bz) = 0,
so the dipole can have neither an energy minimum nor an energy maximum. That is, there is no point in free space where the dipole is either stable in all directions or unstable in all directions.
Magnetic dipoles aligned parallel or antiparallel to an external field with the magnitude of the dipole proportional to the external field will correspond to paramagnetic and diamagnetic materials respectively. In these cases the energy will be given by

U = −k B·B = −k(Bx² + By² + Bz²),

where k is a constant greater than zero for paramagnetic materials and less than zero for diamagnetic materials.

In this case, it will be shown that

∇²(B·B) ≥ 0,

which, combined with the sign of the constant k, shows that paramagnetic materials can have energy maxima but not energy minima and diamagnetic materials can have energy minima but not energy maxima. That is, paramagnetic materials can be unstable in all directions but not stable in all directions and diamagnetic materials can be stable in all directions but not unstable in all directions. Of course, both materials can have saddle points.
Finally, the magnetic dipole of a ferromagnetic material (a permanent magnet) that is aligned parallel or antiparallel to a magnetic field will be given by

M = k B/|B|,

so the energy will be given by

U = −k B·B/|B| = −k|B| = −k(Bx² + By² + Bz²)^1/2;

but this is just proportional to the square root of the energy for the paramagnetic and diamagnetic case discussed above and, since the square root function is monotonically increasing, any minimum or maximum in the paramagnetic and diamagnetic case will be a minimum or maximum here as well. There are, however, no known configurations of permanent magnets that stably levitate, so there may be other reasons not discussed here why it is not possible to maintain permanent magnets in orientations antiparallel to magnetic fields (at least not without rotation—see spin-stabilized magnetic levitation).
Detailed proofs
Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point charges. The proofs presented here for individual dipoles should be generalizable to collections of magnetic dipoles because they are formulated in terms of energy, which is additive. A rigorous treatment of this topic is, however, currently beyond the scope of this article.
Fixed-orientation magnetic dipole
It will be proven that at all points in free space

∇²U = 0.

The energy U of the magnetic dipole M in the external magnetic field B is given by

U = −M·B = −(MxBx + MyBy + MzBz).

The Laplacian will be

∇²U = −∇²(MxBx + MyBy + MzBz).

Expanding and rearranging the terms (and noting that the dipole M is constant) we have

∇²U = −(Mx ∇²Bx + My ∇²By + Mz ∇²Bz),

but the Laplacians of the individual components of a magnetic field are zero in free space (not counting electromagnetic radiation), so

∇²U = −(Mx·0 + My·0 + Mz·0) = 0,
which completes the proof.
Magnetic dipole aligned with external field lines
The case of a paramagnetic or diamagnetic dipole is considered first. The energy is given by

U = −k B·B = −k(Bx² + By² + Bz²).

Expanding and rearranging terms, and using ∇²(Bi²) = 2|∇Bi|² + 2Bi ∇²Bi for each component,

∇²U = −2k (|∇Bx|² + |∇By|² + |∇Bz|² + Bx ∇²Bx + By ∇²By + Bz ∇²Bz),

but since the Laplacian of each individual component of the magnetic field is zero,

∇²U = −2k (|∇Bx|² + |∇By|² + |∇Bz|²),

and since the square of a magnitude is always positive,

∇²U ≤ 0 for paramagnetic materials (k > 0) and ∇²U ≥ 0 for diamagnetic materials (k < 0).
As discussed above, this means that the Laplacian of the energy of a paramagnetic material can never be positive (no stable levitation) and the Laplacian of the energy of a diamagnetic material can never be negative (no instability in all directions).
Further, because the energy for a dipole of fixed magnitude aligned with the external field will be the square root of the energy above, the same analysis applies.
Laplacian of individual components of a magnetic field
It is proven here that the Laplacian of each individual component of a magnetic field is zero. This shows the need to invoke the properties of magnetic fields that the divergence of a magnetic field is always zero and the curl of a magnetic field is zero in free space. (That is, in the absence of current or a changing electric field.) See Maxwell's equations for a more detailed discussion of these properties of magnetic fields.
Consider the Laplacian of the x component of the magnetic field,

∇²Bx = ∂²Bx/∂x² + ∂²Bx/∂y² + ∂²Bx/∂z².

Because the curl of B is zero,

∂Bx/∂y = ∂By/∂x

and

∂Bx/∂z = ∂Bz/∂x,

so we have

∇²Bx = ∂²Bx/∂x² + ∂/∂y(∂By/∂x) + ∂/∂z(∂Bz/∂x).

But since Bx is continuous, the order of differentiation doesn't matter, giving

∇²Bx = ∂²Bx/∂x² + ∂/∂x(∂By/∂y + ∂Bz/∂z).

The divergence of B is zero,

∂Bx/∂x + ∂By/∂y + ∂Bz/∂z = 0,

so

∇²Bx = ∂²Bx/∂x² − ∂²Bx/∂x² = 0.
The Laplacian of the y component of the magnetic field, By, and the Laplacian of the z component of the magnetic field, Bz, can be calculated analogously. Alternatively, one can use the identity

∇²B = ∇(∇·B) − ∇×(∇×B),

where both terms in the parentheses vanish.
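A quick symbolic check of this result, using SymPy and the textbook field of a point dipole aligned with the z-axis (overall constants are dropped, and the expressions are only valid away from the origin); this is an illustration, not part of the proof above.

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Cartesian components of a point-dipole field with the moment along z (constants omitted).
Bx = 3 * x * z / r**5
By = 3 * y * z / r**5
Bz = (3 * z**2 - r**2) / r**5

def laplacian(f):
    """Return the Laplacian of a scalar expression in Cartesian coordinates."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

for name, component in (("Bx", Bx), ("By", By), ("Bz", Bz)):
    print(name, sp.simplify(laplacian(component)))   # each prints 0
```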
Loopholes
Earnshaw's theorem has no exceptions for non-moving permanent ferromagnets. However, Earnshaw's theorem does not necessarily apply to moving ferromagnets, certain electromagnetic systems, pseudo-levitation and diamagnetic materials. These can thus seem to be exceptions, though in fact they exploit the constraints of the theorem.
Spin-stabilized magnetic levitation: Spinning ferromagnets (such as the Levitron) can, while spinning, magnetically levitate using only permanent ferromagnets, the system adding gyroscopic forces. (The spinning ferromagnet is not a "non-moving ferromagnet").
Switching the polarity of an electromagnet or system of electromagnets can levitate a system by continuous expenditure of energy. Maglev trains are one application.
Pseudo-levitation constrains the movement of the magnets usually using some form of a tether or wall. This works because the theorem shows only that there is some direction in which there will be an instability. Limiting movement in that direction allows levitation with fewer than the full 3 dimensions available for movement (note that the theorem is proven for 3 dimensions, not 1D or 2D).
Diamagnetic materials are excepted because they exhibit only repulsion against the magnetic field, whereas the theorem requires materials that have both repulsion and attraction. An example of this is the famous levitating frog (see Diamagnetism).
See also
Electrostatic levitation
Magnetic levitation
References
External links
"Levitation Possible", a discussion of Earnshaw's theorem and its consequences for levitation, along with several ways to levitate with electromagnetic fields
Electrostatics
Eponymous theorems of physics
Levitation
No-go theorems | Earnshaw's theorem | [
"Physics"
] | 2,979 | [
"Physical phenomena",
"No-go theorems",
"Equations of physics",
"Levitation",
"Eponymous theorems of physics",
"Motion (physics)",
"Physics theorems"
] |
486,341 | https://en.wikipedia.org/wiki/Surface-wave-sustained%20discharge | A surface-wave-sustained discharge is a plasma that is excited by propagation of electromagnetic surface waves. Surface wave plasma sources can be divided into two groups depending upon whether the plasma generates part of its own waveguide by ionisation or not. The former is called a self-guided plasma. The surface wave mode allows the generation of uniform high-frequency-excited plasmas in volumes whose lateral dimensions extend over several wavelengths of the electromagnetic wave, e.g. for microwaves of 2.45 GHz in vacuum the wavelength amounts to 12.2 cm.
Theory
For a long time, microwave plasma sources without a magnetic field were not considered suitable for the generation of high density plasmas. Electromagnetic waves cannot propagate in over-dense plasmas. The wave is reflected at the plasma surface due to the skin effect and becomes an evanescent wave. Its penetration depth corresponds to the skin depth δ, which can be approximated by

δ ≈ c/ωpe,

where c is the speed of light and ωpe is the electron plasma frequency.
The non-vanishing penetration depth of an evanescent wave opens an alternative way of heating a plasma: instead of traversing the plasma, the wave is guided by the conductivity of the plasma and propagates along the plasma surface. The wave energy is then transferred to the plasma by an evanescent wave which enters the plasma perpendicular to its surface and decays exponentially over the skin depth. This transfer mechanism makes it possible to generate over-dense plasmas with electron densities beyond the critical density.
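A back-of-the-envelope sketch of the relevant numbers for 2.45 GHz excitation; the electron density used for the skin-depth estimate is an arbitrary example of an over-dense plasma, not a value taken from this article.

```python
import math

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 2.99792458e8         # speed of light, m/s

f = 2.45e9               # excitation frequency, Hz
omega = 2 * math.pi * f
print(f"vacuum wavelength = {c / f * 100:.1f} cm")          # about 12.2 cm

# Critical density: above this the wave can no longer propagate through the bulk plasma.
n_crit = eps0 * m_e * omega**2 / e**2
print(f"critical density ~ {n_crit:.2e} m^-3")              # about 7.4e16 m^-3

# Collisionless skin depth delta = c / omega_pe for an assumed over-dense plasma.
n_e = 1e18               # assumed electron density, m^-3
omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))
print(f"skin depth ~ {c / omega_pe * 1e3:.1f} mm")          # a few millimetres
```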
Design
Surface-wave-sustained plasmas (SWP) can be operated in a large variety of recipient geometries. The pressure range accessible for surface-wave-excited plasmas depends on the process gas and the diameter of the recipient. The larger the chamber diameter, the lower the minimal pressure necessary for the SWP mode. Analogously, the maximal pressure where a stable SWP can be operated decreases with increasing diameter.
The numerical modelling of SWPs is quite involved. The plasma is created by the electromagnetic wave, but it also reflects and guides this same wave. Therefore, a truly self-consistent description is necessary.
References
Waves in plasmas
Surface waves | Surface-wave-sustained discharge | [
"Physics"
] | 419 | [
"Waves in plasmas",
"Physical phenomena",
"Plasma physics",
"Surface waves",
"Plasma phenomena",
"Waves",
"Plasma physics stubs"
] |
486,436 | https://en.wikipedia.org/wiki/Rotational%E2%80%93vibrational%20spectroscopy | Rotational–vibrational spectroscopy is a branch of molecular spectroscopy that is concerned with infrared and Raman spectra of molecules in the gas phase. Transitions involving changes in both vibrational and rotational states can be abbreviated as rovibrational (or ro-vibrational) transitions. When such transitions emit or absorb photons (electromagnetic radiation), the frequency is proportional to the difference in energy levels and can be detected by certain kinds of spectroscopy. Since changes in rotational energy levels are typically much smaller than changes in vibrational energy levels, changes in rotational state are said to give fine structure to the vibrational spectrum. For a given vibrational transition, the same theoretical treatment as for pure rotational spectroscopy gives the rotational quantum numbers, energy levels, and selection rules. In linear and spherical top molecules, rotational lines are found as simple progressions at both higher and lower frequencies relative to the pure vibration frequency. In symmetric top molecules the transitions are classified as parallel when the dipole moment change is parallel to the principal axis of rotation, and perpendicular when the change is perpendicular to that axis. The ro-vibrational spectrum of the asymmetric rotor water is important because of the presence of water vapor in the atmosphere.
Overview
Ro-vibrational spectroscopy concerns molecules in the gas phase. There are sequences of quantized rotational levels associated with both the ground and excited vibrational states. The spectra are often resolved into lines due to transitions from one rotational level in the ground vibrational state to one rotational level in the vibrationally excited state. The lines corresponding to a given vibrational transition form a band.
In the simplest cases the part of the infrared spectrum involving vibrational transitions with the same rotational quantum number (ΔJ = 0) in ground and excited states is called the Q-branch. On the high frequency side of the Q-branch the energy of rotational transitions is added to the energy of the vibrational transition. This is known as the R-branch of the spectrum for ΔJ = +1. The P-branch for ΔJ = −1 lies on the low wavenumber side of the Q branch. The appearance of the R-branch is very similar to the appearance of the pure rotation spectrum (but shifted to much higher wavenumbers), and the P-branch appears as a nearly mirror image of the R-branch. The Q branch is sometimes missing because of transitions with no change in J being forbidden.
The appearance of rotational fine structure is determined by the symmetry of the molecular rotors which are classified, in the same way as for pure rotational spectroscopy, into linear molecules, spherical-, symmetric- and asymmetric- rotor classes. The quantum mechanical treatment of rotational fine structure is the same as for pure rotation.
The strength of an absorption line is related to the number of molecules with the initial values of the vibrational quantum number ν and the rotational quantum number J, and so depends on temperature. Since there are actually 2J + 1 states with rotational quantum number J, the population of level J initially increases with J and then decays at higher J. This gives the characteristic shape of the P and R branches.
A general convention is to label quantities that refer to the vibrational ground and excited states of a transition with double prime and single prime, respectively. For example, the rotational constant for the ground state is written as B′′, and that of the excited state as B′.
Also, these constants are expressed in the molecular spectroscopist's units of cm−1, so that B in this article corresponds to B/(hc) in the definition of rotational constant at Rigid rotor.
Method of combination differences
Numerical analysis of ro-vibrational spectral data would appear to be complicated by the fact that the wavenumber for each transition depends on two rotational constants, B′′ and B′. However combinations which depend on only one rotational constant are found by subtracting wavenumbers of pairs of lines (one in the P-branch and one in the R-branch) which have either the same lower level or the same upper level. For example, in a diatomic molecule the line denoted P(J + 1) is due to the transition (v = 0, J + 1) → (v = 1, J) (meaning a transition from the state with vibrational quantum number v going from 0 to 1 and the rotational quantum number going from some value J + 1 to J, with J > 0), and the line R(J − 1) is due to the transition (v = 0, J − 1) → (v = 1, J). The difference between the two wavenumbers corresponds to the energy difference between the (J + 1) and (J − 1) levels of the lower vibrational state and is denoted by Δ₂F′′(J), since it is the difference between levels differing by two units of J. If centrifugal distortion is included, it is given by

Δ₂F′′(J) = ν̃[R(J − 1)] − ν̃[P(J + 1)] = (4B′′ − 6D′′)(J + 1/2) − 8D′′(J + 1/2)³,

where ν̃ means the wavenumber of the given line. The main term, 4B′′(J + 1/2), comes from the difference between the energy of the rotational state J + 1 and that of the state J − 1 in the lower vibrational level.
The rotational constant of the ground vibrational state B′′ and centrifugal distortion constant, D′′ can be found by least-squares fitting this difference as a function of J. The constant B′′ is used to determine the internuclear distance in the ground state as in pure rotational spectroscopy. (See Appendix)
Similarly the difference R(J) − P(J) depends only on the constants B′ and D′ for the excited vibrational state (v = 1), and B′ can be used to determine the internuclear distance in that state (which is inaccessible to pure rotational spectroscopy).
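A sketch of how the combination-differences fit might be carried out numerically; the line-position differences below are made-up placeholders, not measured data, and the two-parameter model is the one written out above.

```python
import numpy as np

# Hypothetical combination differences R(J-1) - P(J+1), in cm^-1, for J = 1..8.
J = np.arange(1, 9, dtype=float)
delta2F = np.array([11.49, 19.15, 26.80, 34.45, 42.10, 49.74, 57.38, 65.02])

# Model: delta2F(J) = (4B - 6D)(J + 1/2) - 8D(J + 1/2)^3, which is linear in B and D.
u = J + 0.5
A = np.column_stack([4 * u, -(6 * u + 8 * u**3)])
(B, D), *_ = np.linalg.lstsq(A, delta2F, rcond=None)
print(f"B'' = {B:.4f} cm^-1, D'' = {D:.2e} cm^-1")
```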
Linear molecules
Heteronuclear diatomic molecules
Diatomic molecules with the general formula AB have one normal mode of vibration involving stretching of the A-B bond. The vibrational term values G(v), for an anharmonic oscillator are given, to a first approximation, by

G(v) = ωe(v + 1/2) − ωeχe(v + 1/2)²,
where v is a vibrational quantum number, ωe is the harmonic wavenumber and χe is an anharmonicity constant.
When the molecule is in the gas phase, it can rotate about an axis, perpendicular to the molecular axis, passing through the centre of mass of the molecule. The rotational energy is also quantized, with term values to a first approximation given by

F(J) = Bv J(J + 1) − Dv J²(J + 1)²,

where J is a rotational quantum number and Dv is a centrifugal distortion constant. The rotational constant, Bv, depends on the moment of inertia of the molecule, Iv, which varies with the vibrational quantum number, v:

Bv = h / (8π²c Iv),    Iv = (mA mB / (mA + mB)) d²,

where mA and mB are the masses of the atoms A and B, and d represents the distance between the atoms. The term values of the ro-vibrational states are found (in the Born–Oppenheimer approximation) by combining the expressions for vibration and rotation:

S(v, J) = G(v) + F(J) = ωe(v + 1/2) + Bv J(J + 1) − ωeχe(v + 1/2)² − Dv J²(J + 1)².
The first two terms in this expression correspond to a harmonic oscillator and a rigid rotor, the second pair of terms make a correction for anharmonicity and centrifugal distortion. A more general expression was given by Dunham.
The selection rule for electric dipole allowed ro-vibrational transitions, in the case of a diamagnetic diatomic molecule, is

ΔJ = ±1, Δv = ±1 (overtones with |Δv| > 1 are weakly allowed by anharmonicity).
The transition with Δv=±1 is known as the fundamental transition. The selection rule has two consequences.
Both the vibrational and rotational quantum numbers must change. The transition with ΔJ = 0 (Q-branch) is forbidden.
The energy change of rotation can be either subtracted from or added to the energy change of vibration, giving the P- and R- branches of the spectrum, respectively.
The calculation of the transition wavenumbers is more complicated than for pure rotation because the rotational constant Bv is different in the ground and excited vibrational states. A simplified expression for the wavenumbers is obtained when the centrifugal distortion constants D′ and D′′ are approximately equal to each other:

ν̃(m) = ω0 + (B′ + B′′)m + (B′ − B′′)m²,

where positive m values refer to the R-branch (m = J + 1) and negative values refer to the P-branch (m = −J). The term ω0 gives the position of the (missing) Q-branch, the term (B′ + B′′)m implies a progression of approximately equally spaced lines in the P- and R- branches, but the third term, (B′ − B′′)m², shows that the separation between adjacent lines changes with changing rotational quantum number. When B′′ is greater than B′, as is usually the case, as J increases the separation between lines decreases in the R-branch and increases in the P-branch. Analysis of data from the infrared spectrum of carbon monoxide gives a value of B′′ of 1.915 cm−1 and of B′ of 1.898 cm−1. The bond lengths are easily obtained from these constants as r0 = 113.3 pm, r1 = 113.6 pm. These bond lengths are slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, whereas the equilibrium bond length is at the minimum in the potential energy curve. The relation between the rotational constants is given by

Bv = Be − α(v + 1/2),

where v is the vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated when the B values for two different vibrational states can be found. For carbon monoxide req = 113.0 pm.
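The line positions implied by that expression can be generated directly from the carbon monoxide constants quoted above. The band origin used below (about 2143 cm−1 for the CO fundamental) is a standard literature value rather than one stated in this article.

```python
B2 = 1.915       # B'' (ground vibrational state), cm^-1
B1 = 1.898       # B'  (excited vibrational state), cm^-1
omega0 = 2143.3  # assumed band origin of the CO fundamental, cm^-1

def line(m: int) -> float:
    """Wavenumber of line m: m > 0 gives R(m - 1), m < 0 gives P(-m)."""
    return omega0 + (B1 + B2) * m + (B1 - B2) * m * m

for m in range(-3, 4):
    if m == 0:
        continue  # there is no Q-branch line for this band
    label = f"R({m - 1})" if m > 0 else f"P({-m})"
    print(f"{label:6s} {line(m):9.3f} cm^-1")
```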
Nitric oxide, NO, is a special case as the molecule is paramagnetic, with one unpaired electron. Coupling of the electron spin angular momentum with the molecular vibration causes lambda-doubling with calculated harmonic frequencies of 1904.03 and 1903.68 cm−1. Rotational levels are also split.
Homonuclear diatomic molecules
The quantum mechanics for homonuclear diatomic molecules such as dinitrogen, N2, and fluorine, F2, is qualitatively the same as for heteronuclear diatomic molecules, but the selection rules governing transitions are different. Since the electric dipole moment of the homonuclear diatomics is zero, the fundamental vibrational transition is electric-dipole-forbidden and the molecules are infrared inactive. However, a weak quadrupole-allowed spectrum of N2 can be observed when using long path-lengths both in the laboratory and in the atmosphere. The spectra of these molecules can be observed by Raman spectroscopy because the molecular vibration is Raman-allowed.
Dioxygen is a special case as the molecule is paramagnetic so magnetic-dipole-allowed transitions can be observed in the infrared. The unit electron spin has three spatial orientations with respect to the molecular rotational angular momentum vector, N, so that each rotational level is split into three states with total angular momentum (molecular rotation plus electron spin) , J = N + 1, N, and N - 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. The 16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that N may only have odd values.
Raman spectra of diatomic molecules
The selection rule is

ΔJ = 0, ±2,

so that the spectrum has an O-branch (∆J = −2), a Q-branch (∆J = 0) and an S-branch (∆J = +2). In the approximation that B′′ = B′ = B the wavenumbers are given by

ν̃ = ν̃0 + B(4J + 6) for the S-branch (J = 0, 1, 2, …) and ν̃ = ν̃0 − B(4J − 2) for the O-branch (J = 2, 3, 4, …),
since the S-branch starts at J=0 and the O-branch at J=2. So, to a first approximation, the separation between S(0) and O(2) is 12B and the separation between adjacent lines in both O- and S- branches is 4B. The most obvious effect of the fact that B′′ ≠ B′ is that the Q-branch has a series of closely spaced side lines on the low-frequency side due to transitions in which ΔJ=0 for J=1,2 etc. Useful difference formulae, neglecting centrifugal distortion are as follows.
Molecular oxygen is a special case as the molecule is paramagnetic, with two unpaired electrons.
For homonuclear diatomics, nuclear spin statistical weights lead to alternating line intensities between even-J and odd-J levels. For nuclear spin I = 1/2, as in 1H2 and 19F2, the intensity alternation is 1:3. For 2H2 and 14N2, I = 1 and the statistical weights are 6 and 3, so that the even-J levels are twice as intense. For 16O2 (I = 0) all transitions with even values of N are forbidden.
Polyatomic linear molecules
These molecules fall into two classes, according to symmetry: centrosymmetric molecules with point group D∞h, such as carbon dioxide, CO2, and ethyne or acetylene, HCCH; and non-centrosymmetric molecules with point group C∞v such as hydrogen cyanide, HCN, and nitrous oxide, NNO. Centrosymmetric linear molecules have a dipole moment of zero, so do not show a pure rotation spectrum in the infrared or microwave regions. On the other hand, in certain vibrational excited states the molecules do have a dipole moment so that a ro-vibrational spectrum can be observed in the infrared.
The spectra of these molecules are classified according to the direction of the dipole moment change vector. When the vibration induces a dipole moment change pointing along the molecular axis the term parallel is applied, with the symbol ∥. When the vibration induces a dipole moment pointing perpendicular to the molecular axis the term perpendicular is applied, with the symbol ⊥. In both cases the P- and R- branch wavenumbers follow the same trend as in diatomic molecules. The two classes differ in the selection rules that apply to ro-vibrational transitions. For parallel transitions the selection rule is the same as for diatomic molecules, namely, the transition corresponding to the Q-branch is forbidden. An example is the C-H stretching mode of hydrogen cyanide.
For a perpendicular vibration the transition ΔJ=0 is allowed. This means that the transition is allowed for the molecule with the same rotational quantum number in the ground and excited vibrational state, for all the populated rotational states. This makes for an intense, relatively broad, Q-branch consisting of overlapping lines due to each rotational state. The N-N-O bending mode of nitrous oxide, at ca. 590 cm−1 is an example.
The spectra of centrosymmetric molecules exhibit alternating line intensities due to quantum state symmetry effects, since rotation of the molecule by 180° about a 2-fold rotation axis is equivalent to exchanging identical nuclei. In carbon dioxide, the oxygen atoms of the predominant isotopic species 12C16O2 have spin zero and are bosons, so that the total wavefunction must be symmetric when the two 16O nuclei are exchanged. The nuclear spin factor is always symmetric for two spin-zero nuclei, so that the rotational factor must also be symmetric, which is true only for even-J levels. The odd-J rotational levels cannot exist and the allowed vibrational bands consist of only absorption lines from even-J initial levels. The separation between adjacent lines in the P- and R- branches is close to 4B rather than 2B as alternate lines are missing. For acetylene the hydrogens of 1H12C12C1H have spin 1/2 and are fermions, so the total wavefunction is antisymmetric when the two 1H nuclei are exchanged. As is true for ortho and para hydrogen, the nuclear spin function of the two hydrogens has three symmetric ortho states and one antisymmetric para state. For the three ortho states, the rotational wave function must be antisymmetric, corresponding to odd J, and for the one para state it is symmetric, corresponding to even J. The population of the odd-J levels is therefore three times higher than that of the even-J levels, and alternate line intensities are in the ratio 3:1 (Straughan and Walker, vol. 2, pp. 186−188).
Spherical top molecules
These molecules have equal moments of inertia about any axis, and belong to the point groups Td (tetrahedral AX4) and Oh (octahedral AX6). Molecules with these symmetries have a dipole moment of zero, so do not have a pure rotation spectrum in the infrared or microwave regions.
Tetrahedral molecules such as methane, CH4, have infrared-active stretching and bending vibrations, belonging to the T2 (sometimes written as F2) representation. These vibrations are triply degenerate and the rotational energy levels have three components separated by the Coriolis interaction. The rotational term values are given, to a first order approximation, by
where ζ is a constant for Coriolis coupling. The selection rule for a fundamental vibration is

ΔJ = 0, ±1.
Thus, the spectrum is very much like the spectrum from a perpendicular vibration of a linear molecule, with a strong Q-branch composed of many transitions in which the rotational quantum number is the same in the vibrational ground and excited states. The effect of Coriolis coupling is clearly visible in the C-H stretching vibration of methane, though detailed study has shown that the first-order formula for Coriolis coupling, given above, is not adequate for methane.
Symmetric top molecules
These molecules have a unique principal rotation axis of order 3 or higher. There are two distinct moments of inertia and therefore two rotational constants. For rotation about any axis perpendicular to the unique axis, the moment of inertia is and the rotational constant is , as for linear molecules. For rotation about the unique axis, however, the moment of inertia is and the rotational constant is . Examples include ammonia, NH3 and methyl chloride, CH3Cl (both of molecular symmetry described by point group C3v), boron trifluoride, BF3 and phosphorus pentachloride, PCl5 (both of point group D3h), and benzene, C6H6 (point group D6h).
For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a (2J + 1)-fold degeneracy with the quantum number M taking the values +J ... 0 ... −J. The third quantum number, K, is associated with rotation about the principal rotation axis of the molecule. As with linear molecules, transitions are classified as parallel (∥) or perpendicular (⊥), in this case according to the direction of the dipole moment change with respect to the principal rotation axis. A third category involves certain overtones and combination bands which share the properties of both parallel and perpendicular transitions. The selection rules are
For parallel bands: if K ≠ 0, then ΔJ = 0, ±1 and ΔK = 0
If K = 0, then ΔJ = ±1 and ΔK = 0
For perpendicular bands: ΔJ = 0, ±1 and ΔK = ±1
The fact that the selection rules are different is the justification for the classification and it means that the spectra have a different appearance which can often be immediately recognized.
An expression for the calculated wavenumbers of the P- and R- branches may be given as
in which m = J + 1 for the R-branch and −J for the P-branch. The three centrifugal distortion constants DJ, DJK and DK are needed to fit the term values of each level. The wavenumbers of the sub-structure corresponding to each band are given by
where ν̃Qsub represents the Q-branch of the sub-structure, whose position is given by

ν̃Qsub = ν̃0 + [(A′ − B′) − (A′′ − B′′)]K².
Parallel bands
The C-Cl stretching vibration of methyl chloride, CH3Cl, gives a parallel band since the dipole moment change is aligned with the 3-fold rotation axis. The line spectrum shows the sub-structure of this band rather clearly; in reality, very high resolution spectroscopy would be needed to resolve the fine structure fully. Allen and Cross show parts of the spectrum of CH3D and give a detailed description of the numerical analysis of the experimental data.
Perpendicular bands
The selection rule for perpendicular bands give rise to more transitions than with parallel bands. A band can be viewed as a series of sub-structures, each with P, Q and R branches. The Q-branches are separated by approximately 2(A′-B′). The asymmetric HCH bending vibration of methyl chloride is typical. It shows a series of intense Q-branches with weak rotational fine structure. Analysis of the spectra is made more complicated by the fact that the ground-state vibration is bound, by symmetry, to be a degenerate vibration, which means that Coriolis coupling also affects the spectrum.
Hybrid bands
Overtones of a degenerate fundamental vibration have components of more than one symmetry type. For example, the first overtone of a vibration belonging to the E representation in a molecule like ammonia, NH3, will have components belonging to A1 and E representations. A transition to the A1 component will give a parallel band and a transition to the E component will give perpendicular bands; the result is a hybrid band.
Inversion in ammonia
For ammonia, NH3, the symmetric bending vibration is observed as two branches near 930 cm−1 and 965 cm−1. This so-called inversion doubling arises because the symmetric bending vibration is actually a large-amplitude motion known as inversion, in which the nitrogen atom passes through the plane of the three hydrogen atoms, similar to the inversion of an umbrella. The potential energy curve for such a vibration has a double minimum for the two pyramidal geometries, so that the vibrational energy levels occur in pairs which correspond to combinations of the vibrational states in the two potential minima. The two v = 1 states combine to form a symmetric state (1+) at 932.5 cm−1 above the ground (0+) state and an antisymmetric state (1−) at 968.3 cm−1.
The vibrational ground state (v = 0) is also doubled although the energy difference is much smaller, and the transition between the two levels can be measured directly in the microwave region, at ca. 24 GHz (0.8 cm−1). This transition is historically significant and was used in the ammonia maser, the fore-runner of the laser.
Asymmetric top molecules
Asymmetric top molecules have at most 2-fold rotation axes. There are three unequal moments of inertia about three mutually perpendicular principal axes. The spectra are very complex. The transition wavenumbers cannot be expressed in terms of an analytical formula but can be calculated using numerical methods.
The water molecule is an important example of this class of molecule, particularly because of the presence of water vapor in the atmosphere. Its low-resolution spectrum illustrates the complexity of such spectra. At wavelengths greater than 10 μm (or wavenumbers less than 1000 cm−1) the absorption is due to pure rotation. The band around 6.3 μm (1590 cm−1) is due to the HOH bending vibration; the considerable breadth of this band is due to the presence of extensive rotational fine structure. High-resolution spectra of this band are shown in Allen and Cross, p 221. The symmetric and asymmetric stretching vibrations are close to each other, so the rotational fine structures of these bands overlap. The bands at shorter wavelength are overtones and combination bands, all of which show rotational fine structure. Medium resolution spectra of the bands around 1600 cm−1 and 3700 cm−1 are shown in Banwell and McCash, p91.
Ro-vibrational bands of asymmetric top molecules are classed as A-, B- or C-type according to whether the dipole moment change lies along the axis of smallest, intermediate or largest moment of inertia, respectively.
Experimental methods
Ro-vibrational spectra are usually measured at high spectral resolution. In the past, this was achieved by using an echelle grating as the spectral dispersion element in a grating spectrometer. This is a type of diffraction grating optimized to use higher diffraction orders. Today at all resolutions the preferred method is FTIR. The primary reason for this is that infrared detectors are inherently noisy, and FTIR detects summed signals at multiple wavelengths simultaneously achieving a higher signal to noise by virtue of Fellgett's advantage for multiplexed methods. The resolving power of an FTIR spectrometer depends on the maximum retardation of the moving mirror. For example, to achieve a resolution of 0.1 cm−1, the moving mirror must have a maximum displacement of 10 cm from its position at zero path difference. Connes measured the vibration-rotation spectrum of Venusian CO2 at this resolution. A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage of FTIR is important for high-resolution spectroscopy as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits.
When measuring the spectra of gases it is relatively easy to obtain very long path-lengths by using a multiple reflection cell. This is important because it allows the pressure to be reduced so as to minimize pressure broadening of the spectral lines, which may degrade resolution. Path lengths up to 20m are commercially available.
Appendix
The method of combination differences uses differences of wavenumbers in the P- and R- branches to obtain data that depend only on rotational constants in the vibrational ground or excited state. For the excited state

Δ₂F′(J) = ν̃[R(J)] − ν̃[P(J)] = (4B′ − 6D′)(J + 1/2) − 8D′(J + 1/2)³.
This function can be fitted, using the method of least squares, to data for carbon monoxide from Harris and Bertolucci. The data calculated with the formula
in which centrifugal distortion is ignored, are shown in the columns labelled with (1). This formula implies that the data should lie on a straight line with slope 2B′′ and intercept zero. At first sight the data appear to conform to this model, with a root mean square residual of 0.21 cm−1. However, when centrifugal distortion is included, using the formula
the least-squares fit is improved markedly, with the rms residual decreasing to 0.000086 cm−1. The calculated data are shown in the columns labelled with (2).
Notes
References
Bibliography
Chapter (Molecular Spectroscopy), Section (Vibration-rotation spectra) and page numbers may be different in different editions.
External links
Infrared gas spectra simulator
NIST Diatomic Spectral Database
NIST Triatomic Spectral Database
NIST Hydrocarbon Spectral Database
Chemical physics
Spectroscopy | Rotational–vibrational spectroscopy | [
"Physics",
"Chemistry"
] | 5,456 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Molecular physics",
"Instrumental analysis",
"nan",
"Spectroscopy",
"Chemical physics"
] |
486,525 | https://en.wikipedia.org/wiki/Surface%20modification | Surface modification is the act of modifying the surface of a material by bringing physical, chemical or biological characteristics different from the ones originally found on the surface of a material.
This modification is usually made to solid materials, but it is possible to find examples of the modification to the surface of specific liquids.
The modification can be done by different methods with a view to altering a wide range of characteristics of the surface, such as: roughness, hydrophilicity, surface charge, surface energy, biocompatibility and reactivity.
Surface engineering
Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing).
Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase. It acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase.
The surface phase of a solid interacts with the surrounding environment. This interaction can degrade the surface phase over time. Environmental degradation of the surface phase over time can be caused by wear, corrosion, fatigue and creep.
Surface engineering involves altering the properties of the surface phase in order to reduce the degradation over time. This is accomplished by making the surface robust to the environment in which it will be used.
Applications and Future of Surface Engineering
Surface engineering techniques are being used in the automotive, aerospace, missile, power, electronic, biomedical, textile, petroleum, petrochemical, chemical, steel, cement, machine tool and construction industries. Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers and composites, can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C3N4), graded deposits, multi-component deposits, etc.
In 1995, surface engineering was a £10 billion market in the United Kingdom. Coatings to protect surfaces against wear and corrosion accounted for approximately half of that market.
Functionalization of antimicrobial surfaces can be used for sterilization in the health industry, for self-cleaning surfaces, and for protection against biofilms.
In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, diffusion, thermal spray & welding using advanced heat sources like plasma, laser, ion, electron, microwave, solar beams, synchrotron radiation, pulsed arc, pulsed combustion, spark, friction and induction.
Losses due to wear and corrosion in the US are estimated at approximately $500 billion. In the US, around 9,524 establishments (including automotive, aircraft, power and construction industries) depend on engineered surfaces, with support from 23,466 industries.
Surface functionalization
Surface functionalization introduces chemical functional groups to a surface. This way, materials with functional groups on their surfaces can be designed from substrates with standard bulk material properties. Prominent examples can be found in semiconductor industry and biomaterial research.
Polymer Surface Functionalization
Plasma processing technologies are successfully employed for polymer surface functionalization.
See also
Surface finishing
Surface science
Tribology
Surface metrology
Surface modification of biomaterials with proteins
Flame treatment
References
Bibliography
R. Chattopadhyay, Advanced Thermally Assisted Surface Engineering Processes, Kluwer Academic Publishers, MA, USA (now Springer, NY), 2004.
R. Chattopadhyay, Surface Wear: Analysis, Treatment, & Prevention, ASM-International, Materials Park, OH, USA, 2001.
S. Konda, "Flame-based synthesis and in situ functionalization of palladium alloy nanoparticles", AIChE Journal, 2018, https://onlinelibrary.wiley.com/doi/full/10.1002/aic.16368
External links
Institute of Surface Chemistry and Catalysis Ulm University
Engineering disciplines
Materials science | Surface modification | [
"Physics",
"Materials_science",
"Engineering"
] | 863 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
487,354 | https://en.wikipedia.org/wiki/Cover%20%28telecommunications%29 | In telecommunications and tradecraft, cover is the technique of concealing or altering the characteristics of communications patterns for the purpose of denying an unauthorized receiver information that would be of value.
The purpose of cover is not to make the communication secure, but to make it look like noise, rendering it uninteresting and not worth analysis. Even if an attacker recognizes the communication as interesting, cover makes traffic analysis more difficult since he must crack the cover before he can find out to whom it is addressed.
Usually, the covered communication is also encrypted. In this way, enemies have no idea you sent a message; friends know you sent a message, but don't know what you said; the intended recipient knows what you said.
Technically, cover sometimes refers to the specific process of modulo-two addition of a pseudorandom bit stream, generated by a cryptographic device, with bits from the control message.
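A minimal sketch of that modulo-two (XOR) operation, using only Python's standard library; the keystream source shown here is a stand-in for whatever pseudorandom generator the cryptographic device actually provides.

```python
import secrets

def cover(message: bytes, keystream: bytes) -> bytes:
    """XOR (modulo-two add) a pseudorandom keystream onto the message bits."""
    if len(keystream) < len(message):
        raise ValueError("keystream must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(message, keystream))

msg = b"CONTROL MESSAGE"
ks = secrets.token_bytes(len(msg))   # stand-in pseudorandom bit stream

covered = cover(msg, ks)
recovered = cover(covered, ks)       # applying the same stream again removes the cover
assert recovered == msg
print(covered.hex())
```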
Source: from Federal Standard 1037C and from MIL-STD-188
References
Cryptography | Cover (telecommunications) | [
"Mathematics",
"Engineering"
] | 208 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
487,433 | https://en.wikipedia.org/wiki/Alkylation | Alkylation is a chemical reaction that entails transfer of an alkyl group. The alkyl group may be transferred as an alkyl carbocation, a free radical, a carbanion, or a carbene (or their equivalents). Alkylating agents are reagents for effecting alkylation. Alkyl groups can also be removed in a process known as dealkylation. Alkylating agents are often classified according to their nucleophilic or electrophilic character. In oil refining contexts, alkylation refers to a particular alkylation of isobutane with olefins. For upgrading of petroleum, alkylation produces a premium blending stock for gasoline. In medicine, alkylation of DNA is used in chemotherapy to damage the DNA of cancer cells. Alkylation is accomplished with the class of drugs called alkylating antineoplastic agents.
Nucleophilic alkylating agents
Nucleophilic alkylating agents deliver the equivalent of an alkyl anion (carbanion). The formal "alkyl anion" attacks an electrophile, forming a new covalent bond between the alkyl group and the electrophile. The counterion, which is a cation such as lithium, can be removed and washed away in the work-up. Examples include the use of organometallic compounds such as Grignard (organomagnesium), organolithium, organocopper, and organosodium reagents. These compounds typically can add to an electron-deficient carbon atom such as at a carbonyl group. Nucleophilic alkylating agents can displace halide substituents on a carbon atom through the SN2 mechanism. With a catalyst, they also alkylate alkyl and aryl halides, as exemplified by Suzuki couplings.
The SN2 mechanism is not available for aryl substituents, where the trajectory to attack the carbon atom would be inside the ring. Thus, only reactions catalyzed by organometallic catalysts are possible.
Alkylation by carbon electrophiles
C-alkylation
C-alkylation is a process for the formation of carbon-carbon bonds. The largest example of this takes place in the alkylation units of petrochemical plants, which convert low-molecular-weight alkenes into high-octane gasoline components. Electron-rich species such as phenols are also commonly alkylated to produce a variety of products; examples include linear alkylbenzenes used in the production of surfactants like LAS, or butylated phenols like BHT, which are used as antioxidants. This can be achieved using either acid catalysts like Amberlyst, or Lewis acids such as aluminium trichloride. On a laboratory scale the Friedel–Crafts reaction uses alkyl halides, as these are often easier to handle than their corresponding alkenes, which tend to be gases. The reaction is catalysed by aluminium trichloride. This approach is rarely used industrially as alkyl halides are more expensive than alkenes.
N-,P-, S- alkylation
N-, P-, and S-alkylation are important processes for the formation of carbon-nitrogen, carbon-phosphorus, and carbon-sulfur bonds.
Amines are readily alkylated. The rate of alkylation follows the order tertiary amine < secondary amine < primary amine. Typical alkylating agents are alkyl halides. Industry often relies on green chemistry methods involving alkylation of amines with alcohols, the byproduct being water. Hydroamination is another green method for N-alkylation.
In the Menshutkin reaction, a tertiary amine is converted into a quaternary ammonium salt by reaction with an alkyl halide. Similar reactions occur when tertiary phosphines are treated with alkyl halides, the products being phosphonium salts.
Thiols are readily alkylated to give thioethers via the thiol-ene reaction. The reaction is typically conducted in the presence of a base or using the conjugate base of the thiol. Thioethers undergo alkylation to give sulfonium ions.
O-alkylation
Alcohols alkylate to give ethers:
R-OH + R'-X -> R-O-R'
When the alkylating agent is an alkyl halide, the conversion is called the Williamson ether synthesis.
Alcohols are also good alkylating agents in the presence of suitable acid catalysts. For example, most methyl amines are prepared by alkylation of ammonia with methanol. The alkylation of phenols is particularly straightforward since it is subject to fewer competing reactions.
Ph-O- + Me2-SO4 -> Ph-O-Me + Me-SO4-
(with the counterion, typically Na+, acting as a spectator ion)
More complex alkylations of alcohols and phenols involve ethoxylation. Ethylene oxide is the alkylating agent in this reaction.
Oxidative addition to metals
In the process called oxidative addition, low-valent metals often react with alkylating agents to give metal alkyls. This reaction is one step in the Cativa process for the synthesis of acetic acid from methyl iodide. Many cross coupling reactions proceed via oxidative addition as well.
Electrophilic alkylating agents
Electrophilic alkylating agents deliver the equivalent of an alkyl cation. Alkyl halides are typical alkylating agents. Trimethyloxonium tetrafluoroborate and triethyloxonium tetrafluoroborate are particularly strong electrophiles due to their overt positive charge and an inert leaving group (dimethyl or diethyl ether). Dimethyl sulfate is intermediate in electrophilicity.
Methylation with diazomethane
Diazomethane is a popular methylating agent in the laboratory, but it is too hazardous (explosive gas with a high acute toxicity) to be employed on an industrial scale without special precautions. Use of diazomethane has been significantly reduced by the introduction of the safer and equivalent reagent trimethylsilyldiazomethane.
Hazards
Electrophilic, soluble alkylating agents are often toxic and carcinogenic, due to their tendency to alkylate DNA. This mechanism of toxicity is relevant to the function of anti-cancer drugs in the form of alkylating antineoplastic agents. Some chemical weapons such as mustard gas (bis(2-chloroethyl) sulfide) function as alkylating agents. Alkylated DNA either does not coil or uncoil properly, or cannot be processed by information-decoding enzymes.
Catalysts
Electrophilic alkylation uses Lewis acids and Brønsted acids, sometimes both. Classically, Lewis acids, e.g., aluminium trichloride, are employed when alkyl halides are used. Brønsted acids are used when alkylating with olefins. Typical catalysts are zeolites, i.e. solid acid catalysts, and sulfuric acid. Silicotungstic acid is used to manufacture ethyl acetate by the alkylation of acetic acid by ethylene:
C2H4 + CH3CO2H -> CH3CO2C2H5
In biology
Alkylation in biology causes DNA damage. It is the transfer of alkyl groups to the nitrogenous bases. It is caused by alkylating agents such as EMS (ethyl methanesulfonate). Bifunctional alkylating agents, which carry two reactive alkyl groups, cause cross-linking in DNA. Bases whose ring nitrogens have been damaged by alkylation are repaired via the Base Excision Repair (BER) pathway.
Commodity chemicals
Several commodity chemicals are produced by alkylation. Included are several fundamental benzene-based feedstocks such as ethylbenzene (precursor to styrene), cumene (precursor to phenol and acetone), and linear alkylbenzene sulfonates (for detergents).
Gasoline production
In a conventional oil refinery, isobutane is alkylated with low-molecular-weight alkenes (primarily a mixture of propene and butene) in the presence of a Brønsted acid catalyst, which can include solid acids (zeolites). The catalyst protonates the alkenes (propene, butene) to produce carbocations, which alkylate isobutane. The product, called "alkylate", is composed of a mixture of high-octane, branched-chain paraffinic hydrocarbons (mostly isoheptane and isooctane). Alkylate is a premium gasoline blending stock because it has exceptional antiknock properties and is clean burning. Alkylate is also a key component of avgas. By combining fluid catalytic cracking, polymerization, and alkylation, refineries can obtain a gasoline yield of 70 percent. The widespread use of sulfuric acid and hydrofluoric acid in refineries poses significant environmental risks. Ionic liquids are used in place of the older generation of strong Bronsted acids.
Dealkylation
Complementing alkylation reactions are the reverse, dealkylations. The most common are demethylations, which occur in biology, organic synthesis, and other areas, especially for methyl ethers and methyl amines.
See also
Hydrodealkylation
Transalkylation
Alkynylation
Friedel–Crafts reaction
:Category:Alkylating agents
:Category:Ethylating agents
:Category:Methylating agents
References
External links
Macrogalleria page on polycarbonate production
Industrial processes
Oil refining
Organic reactions
Chemical processes | Alkylation | [
"Chemistry"
] | 2,076 | [
"Petroleum technology",
"Organic reactions",
"Chemical processes",
"Oil refining",
"nan",
"Chemical process engineering"
] |
487,493 | https://en.wikipedia.org/wiki/Group%208%20element |
Group 8 is a group (column) of chemical elements in the periodic table. It consists of iron (Fe), ruthenium (Ru), osmium (Os) and hassium (Hs). "Group 8" is the modern standard designation for this group, adopted by the IUPAC in 1990. It should not be confused with "group VIIIA" in the CAS system, which is group 18 (current IUPAC), the noble gases. In the older group naming systems, this group was combined with groups 9 and 10 and called group "VIIIB" in the Chemical Abstracts Service (CAS) "U.S. system", or "VIII" in the old IUPAC (pre-1990) "European system" (and in Mendeleev's original table). The elements in this group are all transition metals that lie in the d-block of the periodic table.
While groups (columns) of the periodic table are usually named after their lightest member (as in "the oxygen group" for group 16), iron group has historically been used differently; most often, it means a set of adjacent elements on period (row) 4 of the table that includes iron, such as chromium, manganese, iron, cobalt, and nickel, or only the last three, or some other set, depending on the context.
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior.
Basic properties
The following summaries are drawn from the articles on iron, ruthenium, osmium, and hassium, respectively.
Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing more fresh surfaces for corrosion. High-purity irons (e.g. electrolytic iron) are more resistant to corrosion.
Because it hardens platinum and palladium alloys, ruthenium is used in electrical contacts, where a thin film is sufficient to achieve the desired durability. With its similar properties to and lower cost than rhodium, electric contacts are a major use of ruthenium. The ruthenium plate is applied to the electrical contact and electrode base metal by electroplating or sputtering.
Osmium is a hard but brittle metal that remains lustrous even at high temperatures. It has a very low compressibility. Correspondingly, its bulk modulus is extremely high, reported between 395 and 462 GPa, which rivals that of diamond (443 GPa). The hardness of osmium is moderately high at 4 GPa. Because of its hardness, brittleness, low vapor pressure (the lowest of the platinum-group metals), and very high melting point (the fourth highest of all elements, after carbon, tungsten, and rhenium), solid osmium is difficult to machine, form, or work.
Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, such as the enthalpy of adsorption of hassium tetroxide, but properties of hassium metal remain unknown and only predictions are available. Despite its radioactivity, chemists have nevertheless formed hassium tetroxide and sodium hassate(VII) through various means.
Occurrence and production
In terms of mass, iron is the fourth most common element within the Earth's crust. It is found in many minerals, such as hematite, magnetite, and taconite. Iron is commercially produced by heating these minerals in a blast furnace with coke and calcium carbonate.
Ruthenium is a very rare metal in Earth's crust. It is often found in minerals such as pentlandite and in pyroxenite deposits. It can be commercially obtained as a waste product from refining nickel.
Osmium is found in osmiridium. It can also be obtained as a waste product from refining nickel.
Hassium is extremely radioactive, and as such is not found naturally in the Earth's crust. It is produced via the bombardment of lead-208 atoms with iron-58 atoms.
Biological role
Iron is a mineral used in the human body that is essential for good health. It is a component in the proteins of hemoglobin and myoglobin, both of which are responsible for transporting oxygen around the body. Iron is a part of some hormones as well. A lack of iron in the body can cause iron deficiency anemia, and an excess of iron in the body can be toxic.
Some ruthenium-containing molecules may be used to fight cancer. Normally, however, ruthenium plays no role in the human body.
Both osmium and hassium have no known biological roles.
References
Groups (periodic table) | Group 8 element | [
"Chemistry"
] | 1,112 | [
"Periodic table",
"Groups (periodic table)"
] |
487,510 | https://en.wikipedia.org/wiki/Group%2012%20element |
Group 12, by modern IUPAC numbering, is a group of chemical elements in the periodic table. It includes zinc (Zn), cadmium (Cd), mercury (Hg), and copernicium (Cn). Formerly this group was named IIB (pronounced as "group two B", as the "II" is a Roman numeral) by CAS and old IUPAC system.
The three group 12 elements that occur naturally are zinc, cadmium and mercury. They are all widely used in electric and electronic applications, as well as in various alloys. The first two members of the group share similar properties as they are solid metals under standard conditions. Mercury is the only metal that is known to be a liquid at room temperature – as copernicium's boiling point has not yet been measured accurately enough, it is not yet known whether it is a liquid or a gas under standard conditions. While zinc is very important in the biochemistry of living organisms, cadmium and mercury are both highly toxic. As copernicium does not occur in nature, it has to be synthesized in the laboratory.
Physical and atomic properties
Like other groups of the periodic table, the members of group 12 show patterns in its electron configuration, especially the outermost shells, which result in trends in their chemical behavior:
The group 12 elements are all soft, diamagnetic, divalent metals. They have the lowest melting points among all transition metals. Zinc is bluish-white and lustrous, though most common commercial grades of the metal have a dull finish. Zinc is also referred to in nonscientific contexts as spelter. Cadmium is soft, malleable, ductile, and with a bluish-white color. Mercury is a liquid, heavy, silvery-white metal. It is the only common liquid metal at ordinary temperatures, and as compared to other metals, it is a poor conductor of heat, but a fair conductor of electricity.
The table below is a summary of the key physical properties of the group 12 elements. The data for copernicium is based on relativistic density-functional theory simulations.
Zinc is somewhat less dense than iron and has a hexagonal crystal structure. The metal is hard and brittle at most temperatures but becomes malleable between 100 and 150 °C. Above 210 °C, the metal becomes brittle again and can be pulverized by beating. Zinc is a fair conductor of electricity. For a metal, zinc has relatively low melting (419.5 °C) and boiling points (907 °C). Cadmium is similar in many respects to zinc but forms complex compounds. Unlike other metals, cadmium is resistant to corrosion and as a result it is used as a protective layer when deposited on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes. Mercury has an exceptionally low melting temperature for a d-block metal. A complete explanation of this fact requires a deep excursion into quantum physics, but it can be summarized as follows: mercury has a unique electronic configuration where electrons fill up all the available 1s, 2s, 2p, 3s, 3p, 3d, 4s, 4p, 4d, 4f, 5s, 5p, 5d and 6s subshells. Because such a configuration strongly resists removal of an electron, mercury behaves similarly to the noble gas elements, which form weak bonds and thus solids that melt easily. The stability of the 6s shell is due to the presence of a filled 4f shell. An f shell poorly screens the nuclear charge, which increases the attractive Coulomb interaction of the 6s shell and the nucleus (see lanthanide contraction). The absence of a filled inner f shell is the reason for the somewhat higher melting temperature of cadmium and zinc, although both these metals still melt easily and, in addition, have unusually low boiling points. Gold has atoms with one fewer 6s electron than mercury. Those electrons are more easily removed and are shared between the gold atoms, forming relatively strong metallic bonds.
Zinc, cadmium and mercury form a large range of alloys. Among the zinc containing ones, brass is an alloy of zinc and copper. Other metals long known to form binary alloys with zinc are aluminium, antimony, bismuth, gold, iron, lead, mercury, silver, tin, magnesium, cobalt, nickel, tellurium and sodium. While neither zinc nor zirconium are ferromagnetic, their alloy exhibits ferromagnetism below 35 K. Cadmium is used in many kinds of solder and bearing alloys, due to a low coefficient of friction and fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Because it is a liquid, mercury dissolves other metals and the alloys that are formed are called amalgams. For example, such amalgams are known with gold, zinc, sodium, and many other metals. Because iron is an exception, iron flasks have been traditionally used to trade mercury. Other metals that do not form amalgams with mercury include tantalum, tungsten and platinum. Sodium amalgam is a common reducing agent in organic synthesis, and is also used in high-pressure sodium lamps. Mercury readily combines with aluminium to form a mercury-aluminium amalgam when the two pure metals come into contact. Since the amalgam reacts with air to give aluminium oxide, small amounts of mercury corrode aluminium. For this reason, mercury is not allowed aboard an aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft.
Chemistry
Most of the chemistry has been observed only for the first three members of the group 12. The chemistry of copernicium is not well established and therefore the rest of the section deals only with zinc, cadmium and mercury.
Periodic trends
All elements in this group are metals. The similarity of the metallic radii of cadmium and mercury is an effect of the lanthanide contraction. So, the trend in this group is unlike the trend in group 2, the alkaline earths, where metallic radius increases smoothly from top to bottom of the group. All three metals have relatively low melting and boiling points, indicating that the metallic bond is relatively weak, with relatively little overlap between the valence band and the conduction band. Thus, zinc is close to the boundary between metallic and metalloid elements, which is usually placed between gallium and germanium, though gallium participates in semi-conductors such as gallium arsenide.
Zinc and cadmium are electropositive while mercury is not. As a result, zinc and cadmium metal are good reducing agents. The elements of group 12 have an oxidation state of +2 in which the ions have the rather stable d10 electronic configuration, with a full sub-shell. However, mercury can easily be reduced to the +1 oxidation state; usually, as in the ion , two mercury(I) ions come together to form a metal-metal bond and a diamagnetic species. Cadmium can also form species such as [Cd2Cl6]4− in which the metal's oxidation state is +1. Just as with mercury, the formation of a metal-metal bond results in a diamagnetic compound in which there are no unpaired electrons; thus, making the species very reactive. Zinc(I) is known mostly in the gas phase, in such compounds as linear Zn2Cl2, analogous to calomel. In the solid phase, the rather exotic compound decamethyldizincocene (Cp*Zn–ZnCp*) is known.
Classification
The elements in group 12 are usually considered to be d-block elements, but not transition elements as the d-shell is full. Some authors classify these elements as main-group elements because the valence electrons are in ns2 orbitals. Nevertheless, they share many characteristics with the neighboring group 11 elements on the periodic table, which are almost universally considered to be transition elements. For example, zinc shares many characteristics with the neighboring transition metal, copper. Zinc complexes merit inclusion in the Irving-Williams series as zinc forms many complexes with the same stoichiometry as complexes of copper(II), albeit with smaller stability constants. There is little similarity between cadmium and silver as compounds of silver(II) are rare and those that do exist are very strong oxidizing agents. Likewise the common oxidation state for gold is +3, which precludes there being much common chemistry between mercury and gold, though there are similarities between mercury(I) and gold(I) such as the formation of linear dicyano complexes, [M(CN)2]−. According to IUPAC's definition of transition metal as an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell, zinc and cadmium are not transition metals, while mercury is. This is because only mercury is known to have a compound where its oxidation state is higher than +2, in mercury(IV) fluoride (though its existence is disputed, as later experiments trying to confirm its synthesis could not find evidence of HgF4). However, this classification is based on one highly atypical compound seen at non-equilibrium conditions and is at odds to mercury's more typical chemistry, and Jensen has suggested that it would be better to regard mercury as not being a transition metal.
Relationship with the alkaline earth metals
Although group 12 lies in the d-block of the modern 18-column periodic table, the d electrons of zinc, cadmium, and (almost always) mercury behave as core electrons and do not take part in bonding. This behavior is similar to that of the main-group elements, but is in stark contrast to that of the neighboring group 11 elements (copper, silver, and gold), which also have filled d-subshells in their ground-state electron configuration but behave chemically as transition metals. For example, the bonding in chromium(II) sulfide (CrS) involves mainly the 3d electrons; that in iron(II) sulfide (FeS) involves both the 3d and 4s electrons; but that of zinc sulfide (ZnS) involves only the 4s electrons and the 3d electrons behave as core electrons. Indeed, useful comparison can be made between their properties and the first two members of group 2, beryllium and magnesium, and in earlier short-form periodic table layouts, this relationship is illustrated more clearly. For instance, zinc and cadmium are similar to beryllium and magnesium in their atomic radii, ionic radii, electronegativities, and also in the structure of their binary compounds and their ability to form complex ions with many nitrogen and oxygen ligands, such as complex hydrides and amines. However, beryllium and magnesium are small atoms, unlike the heavier alkaline earth metals and like the group 12 elements (which have a greater nuclear charge but the same number of valence electrons), and the periodic trends down group 2 from beryllium to radium (similar to that of the alkali metals) are not as smooth when going down from beryllium to mercury (which is more similar to that of the p-block main groups) due to the d-block and lanthanide contractions. It is also the d-block and lanthanide contractions that give mercury many of its distinctive properties.
Compounds
All three metal ions form many tetrahedral species, such as [MCl4]2−. Both zinc and cadmium can also form octahedral complexes such as the aqua ions [M(H2O)6]2+ which are present in aqueous solutions of salts of these metals. Covalent character is achieved by using the s and p orbitals. Mercury, however, rarely exceeds a coordination number of four. Coordination numbers of 2, 3, 5, 7 and 8 are also known.
History
The elements of group 12 have been found throughout history, being used since ancient times to being discovered in laboratories. The group itself has not acquired a trivial name, but it has been called group IIB in the past.
Zinc
Zinc was used in impure forms in ancient times, as well as in alloys such as brass that have been found to be over 2000 years old. Zinc was distinctly recognized as a metal under the designation of Fasada in the medical Lexicon ascribed to the Hindu king Madanapala (of Taka dynasty) and written about the year 1374. The metal was also of use to alchemists. The name of the metal was first documented in the 16th century, and is probably derived from the German Zinke ('prong' or 'tooth'), for the needle-like appearance of metallic crystals.
The isolation of metallic zinc in the West may have been achieved independently by several people in the 17th century. German chemist Andreas Marggraf is usually given credit for discovering pure metallic zinc in a 1746 experiment by heating a mixture of calamine and charcoal in a closed vessel without copper to obtain a metal. Experiments on frogs by the Italian doctor Luigi Galvani in 1780 with brass paved the way for the discovery of electrical batteries, galvanization and cathodic protection. In 1799, Galvani's friend, Alessandro Volta, invented the Voltaic pile. The biological importance of zinc was not discovered until 1940 when carbonic anhydrase, an enzyme that scrubs carbon dioxide from blood, was shown to have zinc in its active site.
Cadmium
In 1817, cadmium was discovered in Germany as an impurity in zinc carbonate minerals (calamine) by Friedrich Stromeyer and Karl Samuel Leberecht Hermann. It was named after the Latin cadmia for "calamine", a cadmium-bearing mixture of minerals, which was in turn named after the Greek mythological character, Κάδμος Cadmus, the founder of Thebes. Stromeyer eventually isolated cadmium metal by roasting and reduction of the sulfide.
In 1927, the International Conference on Weights and Measures redefined the meter in terms of a red cadmium spectral line (1 m = 1,553,164.13 wavelengths). This definition has since been changed (see krypton). At the same time, the International Prototype Meter was used as standard for the length of a meter until 1960, when at the General Conference on Weights and Measures the meter was defined in terms of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in vacuum.
Mercury
Mercury has been found in Egyptian tombs which have been dated back to 1500 BC, where mercury was used in cosmetics. It was also used by the ancient Chinese who believed it would improve and prolong health. By 500 BC mercury was used to make amalgams (Medieval Latin amalgama, "alloy of mercury") with other metals. Alchemists thought of mercury as the First Matter from which all metals were formed. They believed that different metals could be produced by varying the quality and quantity of sulfur contained within the mercury. The purest of these was gold, and mercury was called for in attempts at the transmutation of base (or impure) metals into gold, which was the goal of many alchemists.
Hg is the modern chemical symbol for mercury. It comes from hydrargyrum, a Latinized form of the Greek word Ύδραργυρος (hydrargyros), which is a compound word meaning "water-silver" (hydr- = water, argyros = silver) — since it is liquid like water and shiny like silver. The element was named after the Roman god Mercury, known for speed and mobility. It is associated with the planet Mercury; the astrological symbol for the planet is also one of the alchemical symbols for the metal. Mercury is the only metal for which the alchemical planetary name became the common name.
Copernicium
The heaviest known group 12 element, copernicium, was first created on February 9, 1996, at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany, by Sigurd Hofmann, Victor Ninov et al. It was then officially named by the International Union of Pure and Applied Chemistry (IUPAC) after Nicolaus Copernicus on February 19, 2010, the 537th anniversary of Copernicus' birth.
Occurrence
Like in most other d-block groups, the abundance in Earth's crust of group 12 elements decreases with higher atomic number. Zinc is with 65 parts per million (ppm) the most abundant in the group while cadmium with 0.1 ppm and mercury with 0.08 ppm are orders of magnitude less abundant. Copernicium, as a synthetic element with a half-life of a few minutes, may only be present in the laboratories where it was produced.
Group 12 metals are chalcophiles, meaning the elements have low affinities for oxides and prefer to bond with sulfides. Chalcophiles formed as the crust solidified under the reducing conditions of the early Earth's atmosphere. The commercially most important minerals of group 12 elements are sulfide minerals. Sphalerite, which is a form of zinc sulfide, is the most heavily mined zinc-containing ore because its concentrate contains 60–62% zinc. No significant deposits of cadmium-containing ores are known. Greenockite (CdS), the only cadmium mineral of importance, is nearly always associated with sphalerite (ZnS). This association is caused by the geochemical similarity between zinc and cadmium which makes geological separation unlikely. As a consequence, cadmium is produced mainly as a byproduct from mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. One place where metallic cadmium can be found is the Vilyuy River basin in Siberia. Although mercury is an extremely rare element in the Earth's crust, because it does not blend geochemically with those elements that constitute the majority of the crustal mass, mercury ores can be highly concentrated considering the element's abundance in ordinary rock. The richest mercury ores contain up to 2.5% mercury by mass, and even the leanest concentrated deposits are at least 0.1% mercury (12,000 times average crustal abundance). It is found either as a native metal (rare) or in cinnabar (HgS), corderoite, livingstonite and other minerals, with cinnabar being the most common ore.
While mercury and zinc minerals are found in large enough quantities to be mined, cadmium is too similar to zinc and therefore is always present in small quantities in zinc ores from where it is recovered. Identified world zinc resources total about 1.9 billion tonnes. Large deposits are in Australia, Canada and the United States with the largest reserves in Iran. At the current rate of consumption, these reserves are estimated to be depleted sometime between 2027 and 2055. About 346 million tonnes have been extracted throughout history to 2002, and one estimate found that about 109 million tonnes of that remains in use. In 2005, China was the top producer of mercury with almost two-thirds global share followed by Kyrgyzstan. Several other countries are believed to have unrecorded production of mercury from copper electrowinning processes and by recovery from effluents. Because of the high toxicity of mercury, both the mining of cinnabar and refining for mercury are hazardous and historic causes of mercury poisoning.
Production
Zinc is the fourth most common metal in use, trailing only iron, aluminium, and copper with an annual production of about 10 million tonnes. Worldwide, 95% of the zinc is mined from sulfidic ore deposits, in which sphalerite (ZnS) is nearly always mixed with the sulfides of copper, lead and iron. Zinc metal is produced using extractive metallurgy. Roasting converts the zinc sulfide concentrate produced during processing to zinc oxide. For further processing two basic methods are used: pyrometallurgy or electrowinning. Pyrometallurgy processing reduces zinc oxide with carbon or carbon monoxide at about 950 °C into the metal, which is distilled as zinc vapor. The zinc vapor is collected in a condenser. Electrowinning processing leaches zinc from the ore concentrate by sulfuric acid. After this step electrolysis is used to produce zinc metal.
Cadmium is a common impurity in zinc ores, and it is mostly isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. Cadmium is isolated from the zinc produced from the flue dust by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated out of the electrolysis solution.
The richest mercury ores contain up to 2.5% mercury by mass, and even the leanest concentrated deposits are at least 0.1% mercury, with cinnabar (HgS) being the most common ore in the deposits.
Mercury is extracted by heating cinnabar in a current of air and condensing the vapor.
Superheavy elements such as copernicium are produced by bombarding lighter elements in particle accelerators that induces fusion reactions. Whereas most of the isotopes of copernicium can be synthesized directly this way, some heavier ones have only been observed as decay products of elements with higher atomic numbers. The first fusion reaction to produce copernicium was performed by GSI in 1996, who reported the detection of two decay chains of copernicium-277 (though one was later retracted, as it had been based on data fabricated by Victor Ninov):
208Pb + 70Zn → 277Cn + 1n
Applications
Due to the physical similarities which they share, the group 12 elements can be found in many common situations. Zinc and cadmium are commonly used as anti-corrosion (galvanization) agents as they will attract all local oxidation until they completely corrode. These protective coatings can be applied to other metals by hot-dip galvanizing a substance into the molten form of the metal, or through the process of electroplating which may be passivated by the use of chromate salts. Group 12 elements are also used in electrochemistry as they may act as an alternative to the standard hydrogen electrode in addition to being a secondary reference electrode.
In the US, zinc is used predominantly for galvanizing (55%) and for brass, bronze and other alloys (37%). The relative reactivity of zinc and its ability to attract oxidation to itself makes it an efficient sacrificial anode in cathodic protection (CP). For example, cathodic protection of a buried pipeline can be achieved by connecting anodes made from zinc to the pipe. Zinc acts as the anode (negative terminus) by slowly corroding away as it passes electric current to the steel pipeline. Zinc is used to cathodically protect metals that are exposed to sea water from corrosion.
Zinc is used as an anode material for batteries such as in zinc–carbon batteries or zinc–air battery/fuel cells.
A widely used alloy which contains zinc is brass, in which copper is alloyed with anywhere from 3% to 45% zinc, depending upon the type of brass. Brass is generally more ductile and stronger than copper and has superior corrosion resistance. These properties make it useful in communication equipment, hardware, musical instruments, and water valves. Other widely used alloys that contain zinc include nickel silver, typewriter metal, soft and aluminium solder, and commercial bronze. Alloys of primarily zinc with small amounts of copper, aluminium, and magnesium are useful in die casting as well as spin casting, especially in the automotive, electrical, and hardware industries. These alloys are marketed under the name Zamak. Roughly one quarter of all zinc output in the United States (2009) is consumed in the form of zinc compounds, a variety of which are used industrially.
Cadmium has many common industrial uses as it is a key component in battery production, is present in cadmium pigments, coatings, and is commonly used in electroplating. In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel-cadmium batteries. The European Union banned the use of cadmium in electronics in 2004 with several exceptions but reduced the allowed content of cadmium in electronics to 0.002%. Cadmium electroplating, consuming 6% of the global production, can be found in the aircraft industry due to the ability to resist corrosion when applied to steel components.
Mercury is used primarily for the manufacture of industrial chemicals or for electrical and electronic applications. It is used in some thermometers, especially ones which are used to measure high temperatures. A still increasing amount is used as gaseous mercury in fluorescent lamps, while most of the other applications are slowly phased out due to health and safety regulations, and is in some applications replaced with less toxic but considerably more expensive Galinstan alloy. Mercury and its compounds have been used in medicine, although they are much less common today than they once were, now that the toxic effects of mercury and its compounds are more widely understood. It is still used as an ingredient in dental amalgams. In the late 20th century the largest use of mercury was in the mercury cell process (also called the Castner-Kellner process) in the production of chlorine and caustic soda.
Copernicium has no use other than research due to its very high radioactivity.
Biological role and toxicity
The group 12 elements have multiple effects on biological organisms as cadmium and mercury are toxic while zinc is required by most plants and animals in trace amounts.
Zinc is an essential trace element, necessary for plants, animals, and microorganisms. It is "typically the second most abundant transition metal in organisms" after iron and it is the only metal which appears in all enzyme classes. There are 2–4 grams of zinc distributed throughout the human body, and it plays "ubiquitous biological roles". A 2006 study estimated that about 10% of human proteins (2800) potentially bind zinc, in addition to hundreds which transport and traffic zinc. In the U.S., the Recommended Dietary Allowance (RDA) is 8 mg/day for women and 11 mg/day for men. Harmful excessive supplementation may be a problem and should probably not exceed 20 mg/day in healthy people, although the U.S. National Research Council set a Tolerable Upper Intake of 40 mg/day.
Mercury and cadmium are toxic and may cause environmental damage if they enter rivers or rain water. This may result in contaminated crops as well as the bioaccumulation of mercury in a food chain leading to an increase in illnesses caused by mercury and cadmium poisoning.
Notes
References
Bibliography
Groups (periodic table) | Group 12 element | [
"Chemistry"
] | 5,623 | [
"Periodic table",
"Groups (periodic table)"
] |
487,518 | https://en.wikipedia.org/wiki/Group%2010%20element |
Group 10, numbered by current IUPAC style, is the group of chemical elements in the periodic table that consists of nickel (Ni), palladium (Pd), platinum (Pt), and darmstadtium (Ds). All are d-block transition metals. All known isotopes of darmstadtium are radioactive with short half-lives, and are not known to occur in nature; only minute quantities have been synthesized in laboratories.
Characteristics
Chemical properties
The ground state electronic configurations of palladium and platinum are exceptions to Madelung's rule. According to Madelung's rule, the electronic configuration of palladium and platinum are expected to be [Kr] 5s2 4d8 and [Xe] 4f14 5d8 6s2 respectively. However, the 5s orbital of palladium is empty, and the 6s orbital of platinum is only partially filled. The relativistic stabilization of the 7s orbital is the explanation to the predicted electron configuration of darmstadtium, which, unusually for this group, conforms to that predicted by the Aufbau principle. In general, the ground state electronic configurations of heavier atoms and transition metals are more difficult to predict.
Group 10 elements are observed in oxidation states of +1 to +4. The +2 oxidation state is common for nickel and palladium, while +2 and +4 are common for platinum. Oxidation states of -2 and -1 have also been observed for nickel and platinum, and an oxidation state of +5 has been observed for palladium and platinum. Platinum has also been observed in oxidations states of -3 and +6. Theory suggests that platinum may produce a +10 oxidation state under specific conditions, but this remains to be shown empirically.
Physical properties
Darmstadtium has not been isolated in pure form, and its properties have not been conclusively observed; only nickel, palladium, and platinum have had their properties experimentally confirmed. Nickel, platinum, and palladium are typically silvery-white transition metals, and can also be readily obtained in powdered form. They are hard, have a high luster, and are highly ductile. Group 10 elements are resistant to tarnish (oxidation) at STP, are refractory, and have high melting and boiling points.
Occurrence and production
Nickel occurs naturally in ores, and it is the earth's 22nd most abundant element. Two prominent groups of ores from which it can be extracted are laterites and sulfide ores. Indonesia holds the world's largest nickel reserve, and is also its largest producer.
History
Discoveries of the elements
Nickel
The use of nickel, often mistaken for copper, dates as far back as 3500 BCE. Nickel has been discovered in a dagger dating to 3100 BCE, in Egyptian iron beads, a bronze reamer found in Syria dating to 3500–3100 BCE, as copper-nickel alloys in coins minted in Bactria, in weapons and pots near the Senegal river, and as agricultural tools used by Mexicans in the 1700s. There is evidence to suggest that the use of nickel in antiquity came from meteoric iron, such as in the Sumerian name for iron an-bar ("fire from heaven") or in Hittite texts that describe iron's heavenly origins. Nickel was not formally named as an element until A. F. Cronstedt isolated the impure metal from "kupfernickel" (Old Nick's copper) in 1751. In 1804, J. B. Richter determined the physical properties of nickel using a purer sample, describing the metal as ductile and strong with a high melting point. The strength of nickel-steel alloys was described in 1889, and since then nickel steels have seen extensive use, first for military applications and then in the development of corrosion- and heat-resistant alloys during the 20th century.
Palladium
Palladium was isolated by William Hyde Wollaston in 1803 while he was working on refining platinum metals. Palladium was in a residue left behind after platinum was precipitated out of a solution of hydrochloric acid and nitric acid as (NH4)PtCl6. Wollaston named it after the recently discovered asteroid 2 Pallas and anonymously sold small samples of the metal to a shop, which advertised it as a "new noble metal" called "Palladium, or New Silver". This raised doubts about its purity, source, and the identity of its discoverer, causing controversy. He eventually identified himself and read his paper on the discovery of palladium to the Royal Society in 1805.
Platinum
Prior to its formal discovery, platinum was used in jewelry by native Ecuadorians of the province of Esmeraldas. The metal was found in small grains mixed with gold in river deposits, which the workers sintered with gold to form small trinkets such as rings. The first published report of platinum was written by Antonio de Ulloa, a Spanish mathematician, astronomer, and naval officer who observed "platina" (little silver) in the gold mines of Ecuador during a French expedition in 1736. Miners found the "platina" difficult to separate from gold, leading to the abandonment of those mines. Charles Wood (ironmaster) brought samples of the metal to England in 1741 and investigated its properties, observing its high melting point and its presence as small white grains in black metallic sand. Interest in the metal grew after Wood's findings were reported to the Royal Society. Henrik Teofilus Scheffer, a Swedish scientist, referred to the precious metal as "white gold" and the "seventh metal" in 1751, reporting its high durability, high density, and that it melted easily when mixed with copper or arsenic. Both Pierre-François Chabaneau (during the 1780s) and William Hyde Wollaston (during the 1800s) developed a powder metallurgy technique to produce malleable platinum, but kept their process a secret. However, their platinum ingots were brittle and tended to crack easily, likely due to impurities. In the 1800s, furnaces capable of sustaining high temperatures were invented, which eventually replaced powder metallurgy and introduced melted platinum to the market.
Applications
The group 10 metals share several uses. These include:
Decorative purposes, in the form of jewelry and electroplating.
Catalysts in a variety of chemical reactions.
Metal alloys.
Electrical components, due to their predictable changes in electrical resistivity with regard to temperature.
Superconductors, as components in alloys with other metals.
Biological role and toxicity
Platinum complexes are commonly used in chemotherapy as anticancer drugs due to their antitumor activity. Palladium complexes also show marginal antitumor activity, but they are more labile than the corresponding platinum complexes, which limits their usefulness.
See also
Platinum group
Notes and references
Groups (periodic table) | Group 10 element | [
"Chemistry"
] | 1,459 | [
"Periodic table",
"Groups (periodic table)"
] |
487,540 | https://en.wikipedia.org/wiki/Prime%20ring | In abstract algebra, a nonzero ring R is a prime ring if for any two elements a and b of R, arb = 0 for all r in R implies that either a = 0 or b = 0. This definition can be regarded as a simultaneous generalization of both integral domains and simple rings.
Although this article discusses the above definition, prime ring may also refer to the minimal non-zero subring of a field, which is generated by its identity element 1, and determined by its characteristic. For a characteristic 0 field, the prime ring is the integers, and for a characteristic p field (with p a prime number) the prime ring is the finite field of order p (cf. Prime field).
Equivalent definitions
A ring R is prime if and only if the zero ideal {0} is a prime ideal in the noncommutative sense.
This being the case, the equivalent conditions for prime ideals yield the following equivalent conditions for R to be a prime ring:
For any two ideals A and B of R, AB = {0} implies A = {0} or B = {0}.
For any two right ideals A and B of R, AB = {0} implies A = {0} or B = {0}.
For any two left ideals A and B of R, AB = {0} implies A = {0} or B = {0}.
Using these conditions it can be checked that the following are equivalent to R being a prime ring:
All nonzero right ideals are faithful as right R-modules.
All nonzero left ideals are faithful as left R-modules.
Examples
Any domain is a prime ring.
Any simple ring is a prime ring, and more generally: every left or right primitive ring is a prime ring.
Any matrix ring over an integral domain is a prime ring. In particular, the ring of 2 × 2 integer matrices is a prime ring.
Properties
A commutative ring is a prime ring if and only if it is an integral domain.
A nonzero ring is prime if and only if the monoid of its ideals lacks zero divisors.
The ring of matrices over a prime ring is again a prime ring.
Notes
References
Ring theory | Prime ring | [
"Mathematics"
] | 454 | [
"Fields of abstract algebra",
"Ring theory"
] |
487,541 | https://en.wikipedia.org/wiki/Matrix%20ring | In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all matrices with entries in R is a matrix ring denoted Mn(R) (alternative notations: Matn(R) and ). Some sets of infinite matrices form infinite matrix rings. A subring of a matrix ring is again a matrix ring. Over a rng, one can form matrix rngs.
When R is a commutative ring, the matrix ring Mn(R) is an associative algebra over R, and may be called a matrix algebra. In this setting, if M is a matrix and r is in R, then the matrix rM is the matrix M with each of its entries multiplied by r.
Examples
The set of all square matrices over R, denoted Mn(R). This is sometimes called the "full ring of n-by-n matrices".
The set of all upper triangular matrices over R.
The set of all lower triangular matrices over R.
The set of all diagonal matrices over R. This subalgebra of Mn(R) is isomorphic to the direct product of n copies of R.
For any index set I, the ring of endomorphisms of the right R-module $M = \bigoplus_{i \in I} R$ is isomorphic to the ring of column-finite matrices whose entries are indexed by $I \times I$ and whose columns each contain only finitely many nonzero entries. The ring of endomorphisms of M considered as a left R-module is isomorphic to the ring of row-finite matrices.
If R is a Banach algebra, then the condition of row or column finiteness in the previous point can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously of course, the matrices whose row sums are absolutely convergent series also form a ring. This idea can be used to represent operators on Hilbert spaces, for example.
The intersection of the row-finite and column-finite matrix rings forms a ring.
If R is commutative, then Mn(R) has a structure of a *-algebra over R, where the involution * on Mn(R) is matrix transposition.
If A is a C*-algebra, then Mn(A) is another C*-algebra. If A is non-unital, then Mn(A) is also non-unital. By the Gelfand–Naimark theorem, there exists a Hilbert space H and an isometric *-isomorphism from A to a norm-closed subalgebra of the algebra B(H) of continuous operators; this identifies Mn(A) with a subalgebra of B(H⊕n). For simplicity, if we further suppose that H is separable and $A \subseteq B(H)$ is a unital C*-algebra, we can break up A into a matrix ring over a smaller C*-algebra. One can do so by fixing a projection p and hence its orthogonal projection 1 − p; one can identify A with $\begin{pmatrix} pAp & pA(1-p) \\ (1-p)Ap & (1-p)A(1-p) \end{pmatrix}$, where matrix multiplication works as intended because of the orthogonality of the projections. In order to identify A with a matrix ring over a C*-algebra, we require that p and 1 − p have the same "rank"; more precisely, we need that p and 1 − p are Murray–von Neumann equivalent, i.e., there exists a partial isometry u such that $p = u^*u$ and $1 - p = uu^*$. One can easily generalize this to matrices of larger sizes.
Complex matrix algebras Mn(C) are, up to isomorphism, the only finite-dimensional simple associative algebras over the field C of complex numbers. Prior to the invention of matrix algebras, Hamilton in 1853 introduced a ring, whose elements he called biquaternions and modern authors would call tensors in $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{H}$, that was later shown to be isomorphic to M2(C). One basis of M2(C) consists of the four matrix units (matrices with one 1 and all other entries 0); another basis is given by the identity matrix and the three Pauli matrices.
A matrix ring over a field is a Frobenius algebra, with Frobenius form given by the trace of the product: $\sigma(A, B) = \operatorname{tr}(AB)$.
Structure
The matrix ring Mn(R) can be identified with the ring of endomorphisms of the free right R-module of rank n; that is, $M_n(R) \cong \operatorname{End}_R(R^n)$. Matrix multiplication corresponds to composition of endomorphisms.
The ring Mn(D) over a division ring D is an Artinian simple ring, a special type of semisimple ring. The rings of column-finite and of row-finite matrices are not simple and not Artinian if the set I is infinite, but they are still full linear rings.
The Artin–Wedderburn theorem states that every semisimple ring is isomorphic to a finite direct product $\prod_{i=1}^{r} M_{n_i}(D_i)$, for some nonnegative integer r, positive integers ni, and division rings Di.
When we view Mn(C) as the ring of linear endomorphisms of Cn, those matrices which vanish on a given subspace V form a left ideal. Conversely, for a given left ideal I of Mn(C) the intersection of null spaces of all matrices in I gives a subspace of Cn. Under this construction, the left ideals of Mn(C) are in bijection with the subspaces of Cn.
There is a bijection between the two-sided ideals of Mn(R) and the two-sided ideals of R. Namely, for each ideal I of R, the set of all matrices with entries in I is an ideal of Mn(R), and each ideal of Mn(R) arises in this way. This implies that Mn(R) is simple if and only if R is simple. For n ≥ 2, not every left ideal or right ideal of Mn(R) arises by the previous construction from a left ideal or a right ideal in R. For example, the set of matrices whose columns with indices 2 through n are all zero forms a left ideal in Mn(R).
The previous ideal correspondence actually arises from the fact that the rings R and Mn(R) are Morita equivalent. Roughly speaking, this means that the category of left R-modules and the category of left Mn(R)-modules are very similar. Because of this, there is a natural bijective correspondence between the isomorphism classes of left R-modules and left Mn(R)-modules, and between the isomorphism classes of left ideals of R and left ideals of Mn(R). Identical statements hold for right modules and right ideals. Through Morita equivalence, Mn(R) inherits any Morita-invariant properties of R, such as being simple, Artinian, Noetherian, prime.
Properties
If S is a subring of R, then Mn(S) is a subring of Mn(R). For example, Mn(Z) is a subring of Mn(Q).
The matrix ring Mn(R) is commutative if and only if n = 0, R is the zero ring, or R is commutative and n = 1. In fact, this is true also for the subring of upper triangular matrices. Here is an example showing two upper triangular 2 × 2 matrices that do not commute, assuming 1 ≠ 0 in R:
$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$.
For n ≥ 2, the matrix ring Mn(R) over a nonzero ring has zero divisors and nilpotent elements; the same holds for the ring of upper triangular matrices. An example in 2 × 2 matrices would be
$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},$
a nonzero matrix whose square is zero.
The center of Mn(R) consists of the scalar multiples of the identity matrix, In, in which the scalar belongs to the center of R.
The unit group of Mn(R), consisting of the invertible matrices under multiplication, is denoted GLn(R).
If F is a field, then for any two matrices A and B in Mn(F), the equality AB = In implies BA = In. This is not true for every ring R though. A ring R whose matrix rings all have the mentioned property is known as a stably finite ring.
Matrix semiring
In fact, R needs to be only a semiring for Mn(R) to be defined. In this case, Mn(R) is a semiring, called the matrix semiring. Similarly, if R is a commutative semiring, then Mn(R) is a matrix semialgebra.
For example, if R is the Boolean semiring (the two-element Boolean algebra with 1 + 1 = 1), then Mn(R) is the semiring of binary relations on an n-element set with union as addition, composition of relations as multiplication, the empty relation (zero matrix) as the zero, and the identity relation (identity matrix) as the unity.
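To make the correspondence concrete, here is a small illustrative Python sketch (the function names and the example relations are made up for this purpose) that represents binary relations on a 3-element set as Boolean matrices, with composition realized as Boolean matrix multiplication and union as entrywise OR.

def bool_mat_mult(a, b):
    # (a*b)[i][j] = OR_k (a[i][k] AND b[k][j]) -- composition of relations
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_mat_add(a, b):
    # entrywise OR -- union of relations
    n = len(a)
    return [[a[i][j] or b[i][j] for j in range(n)] for i in range(n)]

# R relates 0->1 and 1->2;  S relates 1->0 and 2->2  (True = related)
R = [[False, True, False], [False, False, True], [False, False, False]]
S = [[False, False, False], [True, False, False], [False, False, True]]

identity = [[i == j for j in range(3)] for i in range(3)]   # identity relation

print(bool_mat_mult(R, S))              # composite relation R;S: 0->0 and 1->2
print(bool_mat_add(R, S))               # union of the two relations
print(bool_mat_mult(R, identity) == R)  # the identity matrix acts as the unity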
See also
Central simple algebra
Clifford algebra
Hurwitz's theorem (normed division algebras)
Generic matrix ring
Sylvester's law of inertia
Citations
References
, corrected 5th printing
Algebraic structures
Ring theory
Matrix theory | Matrix ring | [
"Mathematics"
] | 1,850 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
487,599 | https://en.wikipedia.org/wiki/Quasi-Monte%20Carlo%20method | In numerical analysis, the quasi-Monte Carlo method is a method for numerical integration and solving some other problems using low-discrepancy sequences (also called quasi-random sequences or sub-random sequences) to achieve variance reduction. This is in contrast to the regular Monte Carlo method or Monte Carlo integration, which are based on sequences of pseudorandom numbers.
Monte Carlo and quasi-Monte Carlo methods are stated in a similar way.
The problem is to approximate the integral of a function f as the average of the function evaluated at a set of points x1, ..., xN:
$$\int_{[0,1]^s} f(u)\,du \approx \frac{1}{N}\,\sum_{i=1}^{N} f(x_i).$$
Since we are integrating over the s-dimensional unit cube, each xi is a vector of s elements. The difference between quasi-Monte Carlo and Monte Carlo is the way the xi are chosen. Quasi-Monte Carlo uses a low-discrepancy sequence such as the Halton sequence, the Sobol sequence, or the Faure sequence, whereas Monte Carlo uses a pseudorandom sequence. The advantage of using low-discrepancy sequences is a faster rate of convergence. Quasi-Monte Carlo has a rate of convergence close to O(1/N), whereas the rate for the Monte Carlo method is O(1/√N).
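As a rough illustration of the difference, the following Python sketch estimates a smooth integral over the unit cube with both a pseudorandom and a Sobol point set; it assumes NumPy and SciPy's scipy.stats.qmc module are available, and the test integrand (with exact value 1) is chosen purely for demonstration.

import numpy as np
from scipy.stats import qmc

s, n = 5, 2**12                     # dimension and number of points

def f(x):
    # Smooth test integrand whose exact integral over [0,1]^s is 1
    return np.prod((np.pi / 2) * np.sin(np.pi * x), axis=1)

rng = np.random.default_rng(0)
x_mc  = rng.random((n, s))                                  # pseudorandom points
x_qmc = qmc.Sobol(d=s, scramble=False, seed=0).random(n)    # low-discrepancy points

print("MC  estimate:", f(x_mc).mean())    # error typically O(1/sqrt(N))
print("QMC estimate:", f(x_qmc).mean())   # error typically close to O(1/N)

With a smooth integrand like this one, the Sobol estimate is typically much closer to the exact value 1 than the pseudorandom estimate for the same number of points.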
The Quasi-Monte Carlo method recently became popular in the area of mathematical finance or computational finance. In these areas, high-dimensional numerical integrals, where the integral should be evaluated within a threshold ε, occur frequently. Hence, the Monte Carlo method and the quasi-Monte Carlo method are beneficial in these situations.
Approximation error bounds of quasi-Monte Carlo
The approximation error of the quasi-Monte Carlo method is bounded by a term proportional to the discrepancy of the set x1, ..., xN. Specifically, the Koksma–Hlawka inequality states that the error
$$\varepsilon = \left| \int_{[0,1]^s} f(u)\,du - \frac{1}{N}\,\sum_{i=1}^{N} f(x_i) \right|$$
is bounded by
$$|\varepsilon| \leq V(f)\, D_N^{*},$$
where V(f) is the Hardy–Krause variation of the function f (see Morokoff and Caflisch (1995) for the detailed definitions). DN* is the so-called star discrepancy of the set (x1, ..., xN) and is defined as
$$D_N^{*} = \sup_{Q \subset [0,1]^s} \left| \frac{\#\{\,x_i \in Q\,\}}{N} - \operatorname{vol}(Q) \right|,$$
where Q is a rectangular solid in [0,1]s with sides parallel to the coordinate axes. The inequality can be used to show that the error of the approximation by the quasi-Monte Carlo method is $O\left(\frac{(\log N)^s}{N}\right)$, whereas the Monte Carlo method has a probabilistic error of $O\left(\frac{1}{\sqrt{N}}\right)$. Thus, for sufficiently large N, quasi-Monte Carlo will always outperform random Monte Carlo. However, $(\log N)^s$ grows exponentially quickly with the dimension, meaning a poorly-chosen sequence can be much worse than Monte Carlo in high dimensions. In practice, it is almost always possible to select an appropriate low-discrepancy sequence, or apply an appropriate transformation to the integrand, to ensure that quasi-Monte Carlo performs at least as well as Monte Carlo (and often much better).
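The star discrepancy itself is expensive to compute exactly, but a computable L2 analogue is available; the sketch below assumes that SciPy's scipy.stats.qmc.discrepancy function supports the 'L2-star' method, and simply compares a scrambled Sobol set against a pseudorandom set of the same size.

import numpy as np
from scipy.stats import qmc

n, s = 1024, 2
random_pts = np.random.default_rng(1).random((n, s))        # pseudorandom sample
sobol_pts  = qmc.Sobol(d=s, scramble=True, seed=1).random(n)  # low-discrepancy sample

# L2-star discrepancy: a computable quadratic-mean analogue of D_N*
print("random:", qmc.discrepancy(random_pts, method="L2-star"))
print("Sobol :", qmc.discrepancy(sobol_pts,  method="L2-star"))

The low-discrepancy set typically reports a markedly smaller value, which is what the Koksma–Hlawka bound rewards.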
Monte Carlo and quasi-Monte Carlo for multidimensional integrations
For one-dimensional integration, quadrature methods such as the trapezoidal rule, Simpson's rule, or Newton–Cotes formulas are known to be efficient if the function is smooth. These approaches can also be used for multidimensional integrations by repeating the one-dimensional integrals over multiple dimensions. However, the number of function evaluations grows exponentially as s, the number of dimensions, increases. Hence, a method that can overcome this curse of dimensionality should be used for multidimensional integrations. The standard Monte Carlo method is frequently used when the quadrature methods are difficult or expensive to implement. Monte Carlo and quasi-Monte Carlo methods are accurate and relatively fast when the dimension is high, up to 300 or higher.
Morokoff and Caflisch studied the performance of Monte Carlo and quasi-Monte Carlo methods for integration. In the paper, Halton, Sobol, and Faure sequences for quasi-Monte Carlo are compared with the standard Monte Carlo method using pseudorandom sequences. They found that the Halton sequence performs best for dimensions up to around 6; the Sobol sequence performs best for higher dimensions; and the Faure sequence, while outperformed by the other two, still performs better than a pseudorandom sequence.
However, Morokoff and Caflisch gave examples where the advantage of the quasi-Monte Carlo is less than expected theoretically. Still, in the examples studied by Morokoff and Caflisch, the quasi-Monte Carlo method did yield a more accurate result than the Monte Carlo method with the same number of points. Morokoff and Caflisch remark that the advantage of the quasi-Monte Carlo method is greater if the integrand is smooth, and the number of dimensions s of the integral is small.
Drawbacks of quasi-Monte Carlo
Lemieux mentioned the drawbacks of quasi-Monte Carlo:
In order for the quasi-Monte Carlo error bound V(f) DN to be smaller than the Monte Carlo error O(N^(−1/2)), DN needs to be small and N needs to be large (e.g. N > 2^s). For large s, depending on the value of N, the discrepancy of a point set from a low-discrepancy generator might be not smaller than for a random set.
For many functions arising in practice, V(f) = ∞ (e.g. if Gaussian variables are used).
We only know an upper bound on the error (i.e., ε ≤ V(f) DN) and it is difficult to compute V(f) and DN.
In order to overcome some of these difficulties, we can use a randomized quasi-Monte Carlo method.
Randomization of quasi-Monte Carlo
Since the low-discrepancy sequences are not random but deterministic, the quasi-Monte Carlo method can be seen as a deterministic or derandomized algorithm. In this case, we only have the bound (e.g., ε ≤ V(f) DN) for the error, and the error is hard to estimate. In order to recover our ability to analyze and estimate the variance, we can randomize the method (see randomization for the general idea). The resulting method is called the randomized quasi-Monte Carlo method and can also be viewed as a variance reduction technique for the standard Monte Carlo method. Among several methods, the simplest transformation procedure is through random shifting. Let {x1,...,xN} be the point set from the low-discrepancy sequence. We sample an s-dimensional random vector U and mix it with {x1, ..., xN}. In detail, for each xj, create
x′j = (xj + U) mod 1 and use the sequence (x′j) instead of (xj). If we have R replications for Monte Carlo, sample an s-dimensional random vector U for each replication. Randomization allows one to give an estimate of the variance while still using quasi-random sequences. Compared to pure quasi-Monte Carlo, the number of samples of the quasi-random sequence will be divided by R for an equivalent computational cost, which reduces the theoretical convergence rate. Compared to standard Monte Carlo, the variance and the computation speed are slightly better, according to the experimental results in Tuffin (2008).
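A minimal sketch of this random-shift construction is given below; the function name, the test integrand in the usage comment, and the input array pts of low-discrepancy points are illustrative assumptions rather than notation from the text above.

```python
import numpy as np

def shifted_estimates(f, pts, replications, seed=0):
    """Return one integral estimate per random shift of the N x s point set pts."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(replications):
        U = rng.random(pts.shape[1])      # one uniform shift component per dimension
        shifted = (pts + U) % 1.0         # the random shift described above
        estimates.append(np.mean([f(x) for x in shifted]))
    return np.array(estimates)

# est.mean() is the randomized quasi-Monte Carlo estimate, and est.std(ddof=1)
# gives an empirical error estimate, which plain quasi-Monte Carlo does not provide.
# Example usage with some precomputed low-discrepancy array pts:
#   est = shifted_estimates(lambda x: x.prod(), pts, replications=10)
```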
See also
References
R. E. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica vol. 7, Cambridge University Press, 1998, pp. 1–49.
Josef Dick and Friedrich Pillichshammer, Digital Nets and Sequences. Discrepancy Theory and Quasi-Monte Carlo Integration, Cambridge University Press, Cambridge, 2010,
Gunther Leobacher and Friedrich Pillichshammer, Introduction to quasi-Monte Carlo Integration and Applications, Compact Textbooks in Mathematics, Birkhäuser, 2014,
Michael Drmota and Robert F. Tichy, Sequences, discrepancies and applications, Lecture Notes in Math., 1651, Springer, Berlin, 1997,
William J. Morokoff and Russel E. Caflisch, Quasi-random sequences and their discrepancies, SIAM J. Sci. Comput. 15 (1994), no. 6, 1251–1279
Harald Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics, 1992.
Harald G. Niederreiter, Quasi-Monte Carlo methods and pseudo-random numbers, Bull. Amer. Math. Soc. 84 (1978), no. 6, 957–1041
Oto Strauch and Štefan Porubský, Distribution of Sequences: A Sampler, Peter Lang Publishing House, Frankfurt am Main 2005,
External links
The MCQMC Wiki page contains a lot of free online material on Monte Carlo and quasi-Monte Carlo methods
A very intuitive and comprehensive introduction to Quasi-Monte Carlo methods
Monte Carlo methods
Low-discrepancy sequences | Quasi-Monte Carlo method | [
"Physics"
] | 1,796 | [
"Monte Carlo methods",
"Computational physics"
] |
487,627 | https://en.wikipedia.org/wiki/Domain%20%28ring%20theory%29 | In algebra, a domain is a nonzero ring in which ab = 0 implies a = 0 or b = 0. (Sometimes such a ring is said to "have the zero-product property".) Equivalently, a domain is a ring in which 0 is the only left zero divisor (or equivalently, the only right zero divisor). A commutative domain is called an integral domain. Mathematical literature contains multiple variants of the definition of "domain".
Examples and non-examples
The ring Z/6Z is not a domain, because the images of 2 and 3 in this ring are nonzero elements with product 0. More generally, for a positive integer n, the ring Z/nZ is a domain if and only if n is prime.
A finite domain is automatically a finite field, by Wedderburn's little theorem.
The quaternions form a noncommutative domain. More generally, any division ring is a domain, since every nonzero element is invertible.
The set of all Lipschitz quaternions, that is, quaternions of the form a + bi + cj + dk where a, b, c, d are integers, is a noncommutative subring of the quaternions, hence a noncommutative domain.
Similarly, the set of all Hurwitz quaternions, that is, quaternions of the form a + bi + cj + dk where a, b, c, d are either all integers or all half-integers, is a noncommutative domain.
A matrix ring Mn(R) for n ≥ 2 is never a domain: if R is nonzero, such a matrix ring has nonzero zero divisors and even nilpotent elements other than 0. For example, the square of the matrix unit E12 is 0.
The tensor algebra of a vector space, or equivalently, the algebra of polynomials in noncommuting variables over a field, is a domain. This may be proved using an ordering on the noncommutative monomials.
If R is a domain and S is an Ore extension of R then S is a domain.
The Weyl algebra is a noncommutative domain.
The universal enveloping algebra of any Lie algebra over a field is a domain. The proof uses the standard filtration on the universal enveloping algebra and the Poincaré–Birkhoff–Witt theorem.
Group rings and the zero divisor problem
Suppose that G is a group and K is a field. Is the group ring R = K[G] a domain? The identity
(1 − g)(1 + g + ... + g^(n−1)) = 1 − g^n = 0, valid when g has finite order n > 1,
shows that an element g of finite order induces a zero divisor 1 − g in R. The zero divisor problem asks whether this is the only obstruction; in other words,
Given a field K and a torsion-free group G, is it true that K[G] contains no zero divisors?
No counterexamples are known, but the problem remains open in general (as of 2017).
For many special classes of groups, the answer is affirmative. Farkas and Snider proved in 1976 that if G is a torsion-free polycyclic-by-finite group and char K = 0 then the group ring K[G] is a domain. Later (1980) Cliff removed the restriction on the characteristic of the field. In 1988, Kropholler, Linnell and Moody generalized these results to the case of torsion-free solvable and solvable-by-finite groups. Earlier (1965) work of Michel Lazard, whose importance was not appreciated by the specialists in the field for about 20 years, had dealt with the case where K is the ring of p-adic integers and G is the pth congruence subgroup of GL(n, Zp).
Spectrum of an integral domain
Zero divisors have a topological interpretation, at least in the case of commutative rings: a ring R is an integral domain if and only if it is reduced and its spectrum Spec R is an irreducible topological space. The first property is often considered to encode some infinitesimal information, whereas the second one is more geometric.
An example: the ring k[x, y]/(xy), where k is a field, is not a domain, since the images of x and y in this ring are zero divisors. Geometrically, this corresponds to the fact that the spectrum of this ring, which is the union of the lines x = 0 and y = 0, is not irreducible. Indeed, these two lines are its irreducible components.
See also
Zero divisor
Zero-product property
Divisor (ring theory)
Integral domain
Notes
References
Ring theory
Algebraic structures | Domain (ring theory) | [
"Mathematics"
] | 931 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
487,641 | https://en.wikipedia.org/wiki/Sympatric%20speciation | In evolutionary biology, sympatric speciation is the evolution of a new species from a surviving ancestral species while both continue to inhabit the same geographic region. In evolutionary biology and biogeography, sympatric and sympatry are terms referring to organisms whose ranges overlap so that they occur together at least in some places. If these organisms are closely related (e.g. sister species), such a distribution may be the result of sympatric speciation. Etymologically, sympatry is derived from the Greek roots syn- ('together') and patris ('fatherland'). The term was coined by Edward Bagnall Poulton in 1904, who explains the derivation.
Sympatric speciation is one of three traditional geographic modes of speciation. Allopatric speciation is the evolution of species caused by the geographic isolation of two or more populations of a species. In this case, divergence is facilitated by the absence of gene flow. Parapatric speciation is the evolution of geographically adjacent populations into distinct species. In this case, divergence occurs despite limited interbreeding where the two diverging groups come into contact. In sympatric speciation, there is no geographic constraint to interbreeding. These categories are special cases of a continuum from zero (sympatric) to complete (allopatric) spatial segregation of diverging groups.
In multicellular eukaryotic organisms, sympatric speciation is a plausible process that is known to occur, but the frequency with which it occurs is not known.
In bacteria, however, the analogous process (defined as "the origin of new bacterial species that occupy definable ecological niches") might be more common because bacteria are less constrained by the homogenizing effects of sexual reproduction and are prone to comparatively dramatic and rapid genetic change through horizontal gene transfer.
Evidence
Sympatric speciation events are quite common in plants, which are prone to acquiring multiple homologous sets of chromosomes, resulting in polyploidy. The polyploid offspring occupy the same environment as the parent plants (hence sympatry), but are reproductively isolated.
A number of models have been proposed for alternative modes of sympatric speciation. The most popular, which invokes the disruptive selection model, was first put forward by John Maynard Smith in 1966. Maynard Smith suggested that homozygous individuals may, under particular environmental conditions, have a greater fitness than those with alleles heterozygous for a certain trait. Under the mechanism of natural selection, therefore, homozygosity would be favoured over heterozygosity, eventually leading to speciation. Sympatric divergence could also result from the sexual conflict.
Disruption may also occur in multiple-gene traits. The medium ground finch (Geospiza fortis) is showing gene pool divergence in a population on Santa Cruz Island. Beak morphology conforms to two different size ideals, while intermediate individuals are selected against. Some characteristics (termed magic traits) such as beak morphology may drive speciation because they also affect mating signals. In this case, different beak phenotypes may result in different bird calls, providing a barrier to exchange between the gene pools.
A somewhat analogous system has been reported in horseshoe bats, in which echolocation call frequency appears to be a magic trait. In these bats, the constant frequency component of the call not only determines prey size but may also function in aspects of social communication. Work from one species, the large-eared horseshoe bat (Rhinolophus philippinensis), shows that abrupt changes in call frequency among sympatric morphs is correlated with reproductive isolation. A further well-studied circumstance of sympatric speciation is when insects feed on more than one species of host plant. In this case insects become specialized as they struggle to overcome the various plants' defense mechanisms. (Drès and Mallet, 2002)
Rhagoletis pomonella, the apple maggot, may be currently undergoing sympatric or, more precisely, heteropatric (see heteropatry) speciation. The apple feeding race of this species appears to have spontaneously emerged from the hawthorn feeding race in the 1800–1850 AD time frame, after apples were first introduced into North America. The apple feeding race does not now normally feed on hawthorns, and the hawthorn feeding race does not now normally feed on apples. This may be an early step towards the emergence of a new species.
Some parasitic ants may have evolved via sympatric speciation. Isolated and relatively homogeneous habitats such as crater lakes and islands are among the best geographical settings in which to demonstrate sympatric speciation. For example, Nicaragua crater lake cichlid fishes include nine described species and dozens of undescribed species that have evolved by sympatric speciation. Monostroma latissimum, a marine green alga, also shows sympatric speciation in the southwest Japanese islands. Although the species is panmictic, molecular phylogenetics using nuclear introns revealed staggering diversification within the population.
African cichlids also offer some evidence for sympatric speciation. They show a large amount of diversity in the African Great Lakes. Many studies point to sexual selection as a way of maintaining reproductive isolation. Female choice with regards to male coloration is one of the more studied modes of sexual selection in African cichlids. Female choice is present in cichlids because the female does much of the work in raising the offspring, while the male has little energy input in the offspring. She exerts sensory bias when picking males by choosing those that have colors similar to her or those that are the most colorful. This helps maintain sympatric speciation within the lakes. Cichlids also use acoustic reproductive communication. The male cichlid quivers as a ritualistic display for the female which produces a certain number of pulses and pulse period. Female choice for good genes and sensory bias is one of the deciding factors in this case, selecting for calls that are within her species and that give the best fitness advantage to increase the survivability of the offspring. Male-male competition is a form of intrasexual selection and also has an effect on speciation in African cichlids. Ritualistic fighting among males establishes which males are going to be more successful in mating. This is important in sympatric speciation because species with similar males may be competing for the same females. There may be a fitness advantage for one phenotype that could allow one species to invade another. Studies show this effect in species that are genetically similar, have the capability to interbreed, and show phenotypic color variation. Ecological character displacement is another means for sympatric speciation. Within each lake there are different niches that a species could occupy. For example, different diets and depth of the water could help to maintain isolation between species in the same lake.
Allochrony offers some empirical evidence that sympatric speciation has taken place, as many examples exist of recently diverged (sister taxa) allochronic species. A case of ongoing sympatric divergence due to allochrony might be found in the marine insect Clunio marinus.
A rare example of sympatric speciation in animals is the divergence of "resident" and "transient" orca forms in the northeast Pacific. Resident and transient orcas inhabit the same waters, but avoid each other and do not interbreed. The two forms hunt different prey species and have different diets, vocal behaviour, and social structures. Some divergences between species could also result from contrasts in microhabitats. A population bottleneck occurred around 200,000 years ago greatly reducing the population size at the time as well as the variance of genes which allowed several ecotypes to emerge afterwards.
The European polecat (Mustela putorius) exhibited a rare dark phenotype similar to the European mink (Mustela lutreola) phenotype, which is directly influenced by peculiarities of forest brooks.
Controversy
For some time it was difficult to prove that sympatric speciation was possible, because it was impossible to observe it happening. It was believed by many, and championed by Ernst Mayr, that the theory of evolution by natural selection could not explain how two species could emerge from one if the subspecies were able to interbreed. Since Mayr's heyday in the 1940s and 50s, mechanisms have been proposed that explain how speciation might occur in the face of interbreeding, also known as gene flow. And even more recently concrete examples of sympatric divergence have been empirically studied. The debate now turns to how often sympatric speciation may actually occur in nature and how much of life's diversity it may be responsible for.
History
The German evolutionary biologist Ernst Mayr argued in the 1940s that speciation cannot occur without geographic, and thus reproductive, isolation. He stated that gene flow is the inevitable result of sympatry, which is known to squelch genetic differentiation between populations. Thus, a physical barrier must be present, he believed, at least temporarily, in order for a new biological species to arise. This hypothesis is the source of much controversy around the possibility of sympatric speciation. Mayr's hypothesis was popular and consequently quite influential, but is now widely disputed.
The first to propose what is now the most pervasive hypothesis on how sympatric speciation may occur was John Maynard Smith, in 1966. He came up with the idea of disruptive selection. He figured that if two ecological niches are occupied by a single species, diverging selection between the two niches could eventually cause reproductive isolation. By adapting to have the highest possible fitness in the distinct niches, two species may emerge from one even if they remain in the same area, and even if they are mating randomly.
Defining sympatry
Investigating the possibility of sympatric speciation requires a definition thereof, especially in the 21st century, when mathematical modeling is used to investigate or to predict evolutionary phenomena. Much of the controversy concerning sympatric speciation may lie solely on an argument over what sympatric divergence actually is. The use of different definitions by researchers is a great impediment to empirical progress on the matter. The dichotomy between sympatric and allopatric speciation is no longer accepted by the scientific community. It is more useful to think of a continuum, on which there are limitless levels of geographic and reproductive overlap between species. On one extreme is allopatry, in which the overlap is zero (no gene flow), and on the other extreme is sympatry, in which the ranges overlap completely (maximal gene flow).
The varying definitions of sympatric speciation fall generally into two categories: definitions based on biogeography, or on population genetics. As a strictly geographical concept, sympatric speciation is defined as one species diverging into two while the ranges of both nascent species overlap entirely – this definition is not specific enough about the original population to be useful in modeling.
Definitions based on population genetics are not necessarily spatial or geographical in nature, and can sometimes be more restrictive. These definitions deal with the demographics of a population, including allele frequencies, selection, population size, the probability of gene flow based on sex ratio, life cycles, etc. The main discrepancy between the two types of definitions tends to be the necessity for "panmixia". Population genetics definitions of sympatry require that mating be dispersed randomly – or that it be equally likely for an individual to mate with either subspecies, in one area as another, or on a new host as a nascent one: this is also known as panmixia. Population genetics definitions, also known as non-spatial definitions, thus require the real possibility for random mating, and do not always agree with spatial definitions on what is and what is not sympatry.
For example, micro-allopatry, also known as macro-sympatry, is a condition where there are two populations whose ranges overlap completely, but contact between the species is prevented because they occupy completely different ecological niches (such as diurnal vs. nocturnal). This can often be caused by host-specific parasitism, which causes dispersal to look like a mosaic across the landscape. Micro-allopatry is included as sympatry according to spatial definitions, but, as it does not satisfy panmixia, it is not considered sympatry according to population genetics definitions.
Mallet et al. (2002) claims that the new non-spatial definition is lacking in an ability to settle the debate about whether sympatric speciation regularly occurs in nature. They suggest using a spatial definition, but one that includes the role of dispersal, also known as cruising range, so as to represent more accurately the possibility for gene flow. They assert that this definition should be useful in modeling. They also state that under this definition, sympatric speciation seems plausible.
Current state of the controversy
Evolutionary theory as well as mathematical models have predicted some plausible mechanisms for the divergence of species without a physical barrier. In addition there have now been several studies that have identified speciation that has occurred, or is occurring with gene flow (see section above: evidence). Molecular studies have been able to show that, in some cases where there is no chance for allopatry, species continue to diverge. One such example is a pair of species of isolated desert palms. Two distinct, but closely related species exist on the same island, but they occupy two distinct soil types found on the island, each with a drastically different pH balance. Because they are palms they send pollen through the air they could freely interbreed, except that speciation has already occurred, so that they do not produce viable hybrids. This is hard evidence for the fact that, in at least some cases, fully sympatric species really do experience diverging selection due to competition, in this case for a spot in the soil.
This, and the few other concrete examples that have been found, are just that: few, so they tell us little about how often sympatry actually results in speciation in a more typical context. The burden now lies on providing evidence for sympatric divergence occurring in non-isolated habitats. It is not known how much of the earth's diversity it could be responsible for. Some still say that panmixia should slow divergence, and thus sympatric speciation should be possible but rare. Meanwhile, others claim that much of the earth's diversity could be due to speciation without geographic isolation. The difficulty in supporting a sympatric speciation hypothesis has always been that an allopatric scenario could always be invented, and such scenarios can be hard to rule out – but modern molecular genetic techniques can now be used to support the theory.
In 2015 Cichlid fish from a tiny volcanic crater lake in Africa were observed in the act of sympatric speciation using DNA sequencing methods. A study found a complex combination of ecological separation and mate choice preference had allowed two ecomorphs to genetically separate even in the presence of some genetic exchange.
Heteropatric speciation
Heteropatric speciation is a special case of sympatric speciation that occurs when different ecotypes or races of the same species geographically coexist but exploit different niches in the same patchy or heterogeneous environment. It is thus a refinement of sympatric speciation, with a behavioral, rather than geographical, barrier to the flow of genes among diverging groups within a population. Behavioral separation as a mechanism for promoting sympatric speciation in a heterogeneous (or patchwork) landscape was highlighted in John Maynard Smith's seminal paper on sympatric speciation. In recognition of the importance of this behavioral versus geographic distinction, Wayne Getz and Veijo Kaitala introduced the term heteropatry in their extension of Maynard Smith's analysis of conditions that facilitate sympatric speciation.
Although some evolutionary biologists still regard sympatric speciation as highly contentious, both theoretical and empirical studies support it as a likely explanation of the diversity of life in particular ecosystems. Arguments implicate competition and niche separation of sympatric ecological variants that evolve through assortative mating into separate races and then species. Assortative mating most easily occurs if mating is linked to niche preference, as occurs in the apple maggot Rhagoletis pomonella, where individual flies from different races use volatile odors to discriminate between hawthorn and apple and look for mates on natal fruit. The term heteropatry semantically resolves the issue of sympatric speciation by reducing it to a scaling issue in terms of the way the landscape is used by individuals versus populations. From a population perspective, the process looks sympatric, but from an individual's perspective, the process looks allopatric, once the time spent flying over or moving quickly through intervening non-preferred niches is taken into account.
See also
Adaptive radiation
Cladistics
Ecotype
History of speciation
Hybrid speciation
Phylogenetics
Polymorphism (biology)
Polyploidy
Reinforcement
Laboratory experiments of speciation
Taxonomy
References
External links
Berkeley evolution 101
Ecology
Evolutionary biology
Speciation
Taxonomy (biology) | Sympatric speciation | [
"Biology"
] | 3,604 | [
"Evolutionary biology",
"Evolutionary processes",
"Speciation",
"Ecology",
"Taxonomy (biology)"
] |
487,748 | https://en.wikipedia.org/wiki/Communication%20source | A source or sender is one of the basic concepts of communication and information processing. Sources are objects which encode message data and transmit the information, via a channel, to one or more observers (or receivers).
In the strictest sense of the word, particularly in information theory, a source is a process that generates message data that one would like to communicate, or reproduce as exactly as possible elsewhere in space or time. A source may be modelled as memoryless, ergodic, stationary, or stochastic, in order of increasing generality.
Communication Source combines Communication and Mass Media Complete and Communication Abstracts to provide full-text access to more than 700 journals, and indexing and abstracting for more than 1,000 core journals. Coverage dates back to 1900.
Content is derived from academic journals, conference papers, conference proceedings, trade publications, magazines and periodicals.
A transmitter can be either a device, for example an antenna, or a human transmitter, for example a speaker. The word "transmitter" corresponds to "emitter", that is to say, something that emits, for example using Hertzian (radio) waves.
In sending mail it also refers to the person or organization that sends a letter and whose address is written on the envelope of the letter.
In finance, an issuer can be, for example, a bank within the banking system.
In education, an issuer is any person or thing that gives knowledge to the student, for example, the professor.
For communication to be effective, the sender and receiver must share the same code. In ordinary communication, the sender and receiver roles are usually interchangeable.
In terms of the functions of language, the sender fulfills the expressive or emotive function, in which feelings, emotions, and opinions are manifested, as in "The way is dangerous".
In economy
In the economy, the issuer is a legal entity, foundation, company, individual firm, national or foreign governments, investment companies or others that develop, register and then trade commercial securities to finance their operations. The issuers are legally responsible for the issues in question and for reporting the financial conditions, materials developed and whatever their operational activities required by the regulations within their jurisdictions.
See also
CALM M5
References
Source
Information theory | Communication source | [
"Mathematics",
"Technology",
"Engineering"
] | 455 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
23,936,022 | https://en.wikipedia.org/wiki/Infrastructure%20%28number%20theory%29 | In mathematics, an infrastructure is a group-like structure appearing in global fields.
Historic development
In 1972, D. Shanks first discovered the infrastructure of a real quadratic number field and applied his baby-step giant-step algorithm to compute the regulator of such a field in O(D^(1/4+ε)) binary operations (for every ε > 0), where D is the discriminant of the quadratic field; previous methods required O(D^(1/2+ε)) binary operations. Ten years later, H. W. Lenstra published a mathematical framework describing the infrastructure of a real quadratic number field in terms of "circular groups". It was also described by R. Schoof and H. C. Williams, and later extended by H. C. Williams, G. W. Dueck and B. K. Schmid to certain cubic number fields of unit rank one and by J. Buchmann and H. C. Williams to all number fields of unit rank one. In his habilitation thesis, J. Buchmann presented a baby-step giant-step algorithm to compute the regulator of a number field of arbitrary unit rank. The first description of infrastructures in number fields of arbitrary unit rank was given by R. Schoof using Arakelov divisors in 2008.
The infrastructure was also described for other global fields, namely for algebraic function fields over finite fields. This was done first by A. Stein and H. G. Zimmer in the case of real hyperelliptic function fields. It was extended to certain cubic function fields of unit rank one by Renate Scheidler and A. Stein. In 1999, S. Paulus and H.-G. Rück related the infrastructure of a real quadratic function field to the divisor class group. This connection can be generalized to arbitrary function fields and, combining with R. Schoof's results, to all global fields.
One-dimensional case
Abstract definition
A one-dimensional (abstract) infrastructure consists of a real number R > 0 and a finite set X together with an injective map d : X → ℝ/Rℤ. The map d is often called the distance map.
By interpreting ℝ/Rℤ as a circle of circumference R and by identifying X with d(X), one can see a one-dimensional infrastructure as a circle with a finite set of points on it.
Baby steps
A baby step is a unary operation bs on a one-dimensional infrastructure (X, d). Visualizing the infrastructure as a circle, a baby step assigns to each point of X the next one. Formally, one can define this by assigning to x the real number fx := inf{f > 0 : d(x) + f ∈ d(X)}; then, one can define bs(x) := d^(−1)(d(x) + fx).
Giant steps and reduction maps
Observing that ℝ/Rℤ is naturally an abelian group, one can consider the sum d(x) + d(y) for x, y ∈ X. In general, this is not an element of d(X). But instead, one can take an element of d(X) which lies nearby. To formalize this concept, assume that there is a map red : ℝ/Rℤ → X; then, one can define gs(x, y) := red(d(x) + d(y)) to obtain a binary operation gs : X × X → X, called the giant step operation. Note that this operation is in general not associative.
The main difficulty is how to choose the map red. Assuming that one wants to have the condition red(d(x)) = x for all x ∈ X, a range of possibilities remain. One possible choice is given as follows: for v ∈ ℝ/Rℤ, define fv := inf{f ≥ 0 : v − f ∈ d(X)}; then one can define red(v) := d^(−1)(v − fv). This choice, seeming somewhat arbitrary, appears in a natural way when one tries to obtain infrastructures from global fields. Other choices are possible as well, for example choosing an element x such that |v − d(x)| is minimal (here, |w| stands for min{f, R − f}, as w is of the form f + Rℤ with f ∈ [0, R)); one possible construction in the case of real quadratic hyperelliptic function fields is given by S. D. Galbraith, M. Harrison and D. J. Mireles Morales.
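The following toy sketch (with an invented distance set, not one arising from an actual global field) illustrates the circle picture: a baby step walks to the next point, and a giant step adds two distances in ℝ/Rℤ and then reduces back into X using the "largest element not exceeding" choice described above.

```python
R = 10.0                                   # circumference of the circle
X = [0.0, 1.3, 2.9, 4.1, 6.0, 7.7, 9.2]    # hypothetical distance values d(x), sorted

def baby_step(x):
    """Move to the next point on the circle, wrapping around at R."""
    i = X.index(x)
    return X[(i + 1) % len(X)]

def red(t):
    """Reduction map: the point of X whose distance is largest without exceeding t."""
    t %= R
    below = [x for x in X if x <= t]
    return max(below) if below else max(X)   # wrap around if t falls below min(X)

def giant_step(x, y):
    """Add the two distances in R/RZ, then reduce back into X (not associative in general)."""
    return red(x + y)

print(baby_step(2.9))        # 4.1
print(giant_step(2.9, 7.7))  # 2.9 + 7.7 = 10.6 = 0.6 (mod 10), which reduces to 0.0
```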
Relation to real quadratic fields
D. Shanks observed the infrastructure in real quadratic number fields when he was looking at cycles of reduced binary quadratic forms. Note that there is a close relation between reducing binary quadratic forms and continued fraction expansion; one step in the continued fraction expansion of a certain quadratic irrationality gives a unary operation on the set of reduced forms, which cycles through all reduced forms in one equivalence class. Arranging all these reduced forms in a cycle, Shanks noticed that one can quickly jump to reduced forms further away from the beginning of the circle by composing two such forms and reducing the result. He called this binary operation on the set of reduced forms a giant step, and the operation to go to the next reduced form in the cycle a baby step.
Relation to ℝ/Rℤ
The set ℝ/Rℤ has a natural group operation and the giant step operation is defined in terms of it. Hence, it makes sense to compare the arithmetic in the infrastructure to the arithmetic in ℝ/Rℤ. It turns out that the group operation of ℝ/Rℤ can be described using giant steps and baby steps, by representing elements of ℝ/Rℤ by elements of X together with a relatively small real number; this has been first described by D. Hühnlein and S. Paulus and by M. J. Jacobson, Jr., R. Scheidler and H. C. Williams in the case of infrastructures obtained from real quadratic number fields. They used floating point numbers to represent the real numbers, and called these representations CRIAD-representations resp. -representations. More generally, one can define a similar concept for all one-dimensional infrastructures; these are sometimes called -representations.
A set of -representations is a subset of such that the map is a bijection and that for every . If is a reduction map, is a set of -representations; conversely, if is a set of -representations, one can obtain a reduction map by setting , where is the projection on X.
Using the bijection , one can pull over the group operation on to , hence turning into an abelian group by , . In certain cases, this group operation can be explicitly described without using and .
In case one uses the reduction map , one obtains . Given , one can consider with and ; this is in general no element of , but one can reduce it as follows: one computes and ; in case the latter is not negative, one replaces with and continues. If the value was negative, one has that and that , i.e. .
References
Abstract algebra
Algebraic structures
Algebraic number theory
Field (mathematics) | Infrastructure (number theory) | [
"Mathematics"
] | 1,270 | [
"Mathematical structures",
"Algebra",
"Mathematical objects",
"Number theory",
"Algebraic structures",
"Algebraic number theory",
"Abstract algebra"
] |
23,937,596 | https://en.wikipedia.org/wiki/Fixed%20bill | Fixed bill refers to an energy pricing program in which a consumer pays a predetermined amount for their total energy consumption for a given period. The price is independent of the amount of energy the customer uses or the unit price of the energy. Energy companies can offer this type of pricing by hedging the risks of fluctuating demand using weather derivatives.
History
The ability to provide fixed bill energy contracts in the US grew out of the deregulation of the energy industry in the 1990s. An early pioneer in this field was the Equitable Gas Company. It proposed a one-year, fixed-bill natural gas contract to Allegheny County public schools in 1995.
The two inventors of the Equitable Gas product, Bernard Bilski and Rand Warsaw, then left Equitable and formed their own company, WeatherWise USA. Rand Warsaw is now CEO of the company. They have further developed the product and now offer a variety of fixed bill plans to energy companies under license. The energy companies, in turn, offer the plans to their customers.
Patents
On April 16, 1996, Equitable Gas filed a patent application on their process. The inventors were Equitable employees, Bernard Bilski and Rand Warsaw. The title was “Energy Risk Management Method”. It had a description of a method for hedging the weather-related risk that contributes to fluctuating demand in a fixed bill pricing scheme. It has been viewed as a “pure” Business method patent and was rejected by the USPTO examiner, the USPTO board of appeals, the United States Court of Appeals for the Federal Circuit (case In re Bilski) and the Supreme Court of the United States (case Bilski v. Kappos).
Numerous other patent applications have been filed with several having issued. The patents cover different variations of fixed bill offerings.
Commercial products
The following companies design fixed bill products and license them to distributors, such as utilities:
WeatherWise USA
Christensen Associates
The following energy companies offer fixed bill programs directly to consumers:
Nicor Gas
Wisconsin Public Service Corporation
Duke Energy
Alliant Energy
WEC Energy Group
Levelized payment
A Fixed Bill plan is different from a more traditional Levelized Payment plan. In a Levelized Payment plan, a consumer is billed an equal amount per month for a year based on their prior energy use. At the end of the year, however, the consumer will be billed for excess energy they may have used, or get a refund if their actual energy use was less than projected.
In a Fixed Bill plan, what a consumer pays is independent of what they use.
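The following illustrative calculation (all dollar amounts are made up, not taken from any actual plan) contrasts the year-end true-up of a levelized plan with the usage-independent total of a fixed bill:

```python
# Hypothetical monthly energy costs actually incurred by the customer, in dollars.
monthly_usage_cost = [180, 170, 120, 90, 60, 55, 70, 75, 85, 110, 150, 175]
actual_total = sum(monthly_usage_cost)      # 1340

levelized_monthly = 110     # set from a projection of $1,320 of usage for the year
fixed_bill_monthly = 120    # fixed in advance; the premium covers the provider's weather risk

levelized_true_up = actual_total - 12 * levelized_monthly   # billed (or refunded) at year end
fixed_bill_total = 12 * fixed_bill_monthly                  # independent of actual usage

print(levelized_true_up)    # 20   -> the customer owes an extra $20 at year end
print(fixed_bill_total)     # 1440 -> paid regardless of how much energy was used
```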
Controversy
Fixed bill pricing programs have been investigated by States Attorneys General when participants' bills have been higher than nonparticipants' bills. In 2007, for example, Minnesota shut down a fixed bill program run by Xcel Energy and CenterPoint Energy when most participants paid higher than average bills for four out of five years.
See also
Swap (finance)
Weather derivatives
Electricity meter
Hedge (finance)
References
External links
Christensen Associates Fixed Bill page
WeatherWise home page
Energy economics
Electric power
Derivatives (finance)
Financial risk | Fixed bill | [
"Physics",
"Engineering",
"Environmental_science"
] | 628 | [
"Physical quantities",
"Energy economics",
"Power (physics)",
"Electric power",
"Electrical engineering",
"Environmental social science"
] |
23,937,630 | https://en.wikipedia.org/wiki/Intron-mediated%20enhancement | Intron-mediated enhancement (IME) is the ability of an intron sequence to enhance the expression of a gene containing that intron. In particular, the intron must be present in the transcribed region of the gene for enhancement to occur, differentiating IME from the action of typical transcriptional enhancers. Descriptions of this phenomenon were first published in cultured maize cells in 1987, and the term "intron-mediated enhancement" was subsequently coined in 1990. A number of publications have demonstrated that this phenomenon is conserved across eukaryotes, including humans, mice, Arabidopsis, rice, and C. elegans. However, the mechanism(s) by which IME works are still not completely understood.
When testing to see whether any given intron enhances the expression of a gene, it is typical to compare the expression of two constructs, one containing the intron and one without it, and to express the difference between the two results as a "fold increase" in enhancement. Further experiments can specifically point to IME as the cause of expression enhancement - one of the most common is to move the intron upstream of the transcription start site, removing it from the transcript. If the intron can no longer enhance expression, then inclusion of the intron in the transcript is important, and the intron probably causes IME.
Not all introns enhance gene expression, but those that do can enhance expression between 2– and >1,000–fold relative to an intronless control. In Arabidopsis and other plant species, the IMEter has been developed to calculate the likelihood that an intron sequence will enhance gene expression. It does this by calculating a score based on the patterns of nucleotide sequences within the target sequence. The position of an intron within the transcript is also important - the closer an intron is to the start (5' end) of a transcript, the greater its enhancement of gene expression.
References
Gene expression | Intron-mediated enhancement | [
"Chemistry",
"Biology"
] | 402 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
23,941,067 | https://en.wikipedia.org/wiki/Differential%20algebraic%20geometry | Differential algebraic geometry is an area of differential algebra that adapts concepts and methods from algebraic geometry and applies them to systems of differential equations, especially algebraic differential equations.
Another way of generalizing ideas from algebraic geometry is diffiety theory.
References
Differential algebraic geometry (three parts in one pdf), part of the Kolchin Seminar in Differential Algebra
Henri Gillet (2000), Differential algebra – A Scheme Theory Approach, in Differential Algebra and Related Topics: proceedings of the International Workshop, Newark Campus of Rutgers, The State University of New Jersey, 2–3 November 2000, editors Li Guo, William F. Keigher, World Scientific.
Differential algebra | Differential algebraic geometry | [
"Mathematics"
] | 133 | [
"Differential algebra",
"Fields of abstract algebra"
] |
23,941,767 | https://en.wikipedia.org/wiki/Prosolvable%20group | In mathematics, more precisely in algebra, a prosolvable group (less common: prosoluble group) is a group that is isomorphic to the inverse limit of an inverse system of solvable groups. Equivalently, a group is called prosolvable, if, viewed as a topological group, every open neighborhood of the identity contains a normal subgroup whose corresponding quotient group is a solvable group.
Examples
Let p be a prime, and denote the field of p-adic numbers, as usual, by Qp. Then the Galois group Gal(Q̄p/Qp), where Q̄p denotes the algebraic closure of Qp, is prosolvable. This follows from the fact that, for any finite Galois extension L of Qp, the Galois group Gal(L/Qp) can be written as semidirect product , with cyclic of order for some , cyclic of order dividing , and of -power order. Therefore, Gal(L/Qp) is solvable.
See also
Galois theory
References
Mathematical structures
Group theory
Number theory
Topology
Properties of groups
Topological groups | Prosolvable group | [
"Physics",
"Mathematics"
] | 198 | [
"Discrete mathematics",
"Mathematical structures",
"Mathematical objects",
"Space (mathematics)",
"Properties of groups",
"Group theory",
"Topological spaces",
"Fields of abstract algebra",
"Topology",
"Space",
"Algebraic structures",
"Geometry",
"Topological groups",
"Spacetime",
"Numbe... |
25,264,092 | https://en.wikipedia.org/wiki/Slow-growing%20hierarchy | In computability theory, computational complexity theory and proof theory, the slow-growing hierarchy is an ordinal-indexed family of slowly increasing functions gα: N → N (where N is the set of natural numbers, {0, 1, ...}). It contrasts with the fast-growing hierarchy.
Definition
Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The slow-growing hierarchy of functions gα: N → N, for α < μ, is then defined as follows:
g0(n) = 0
gα+1(n) = gα(n) + 1
gα(n) = gα[n](n) for limit ordinal α.
Here α[n] denotes the nth element of the fundamental sequence assigned to the limit ordinal α.
The article on the Fast-growing hierarchy describes a standardized choice for fundamental sequence for all α < ε0.
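As an illustration of the recursion (an implementation sketch, not part of the definition above), the following code evaluates gα for ordinals below ω^ω written in Cantor normal form, using the standard fundamental sequences; the coefficient-list representation of an ordinal is simply an implementation choice.

```python
def g(coeffs, n):
    """Slow-growing g_alpha(n), where coeffs[i] is the coefficient of omega**i in alpha."""
    a = list(coeffs)
    if all(c == 0 for c in a):        # g_0(n) = 0
        return 0
    if a[0] > 0:                      # successor ordinal: g_{beta+1}(n) = g_beta(n) + 1
        a[0] -= 1
        return g(a, n) + 1
    i = next(k for k, c in enumerate(a) if c > 0)
    a[i] -= 1                         # limit ordinal: replace one omega**i term
    a[i - 1] = n                      # by omega**(i-1) * n (standard fundamental sequence)
    return g(a, n)

print(g([0, 1], 5))      # g_omega(5)     = 5
print(g([0, 0, 1], 5))   # g_{omega^2}(5) = 25
print(g([3, 2, 1], 4))   # g_{omega^2 + omega*2 + 3}(4) = 16 + 8 + 3 = 27
```

With these fundamental sequences, gα(n) for ordinals in this range simply evaluates the Cantor normal form of α at ω = n, which is one way to see why the hierarchy grows so slowly.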
Example
Relation to fast-growing hierarchy
The slow-growing hierarchy grows much more slowly than the fast-growing hierarchy. Even gε0 is only equivalent to f3 and gα only attains the growth of fε0 (the first function that Peano arithmetic cannot prove total in the hierarchy) when α is the Bachmann–Howard ordinal.
However, Girard proved that the slow-growing hierarchy eventually catches up with the fast-growing one. Specifically, that there exists an ordinal α such that for all integers n
gα(n) < fα(n) < gα(n + 1)
where fα are the functions in the fast-growing hierarchy. He further showed that the first α this holds for is the ordinal of the theory ID<ω of arbitrary finite iterations of an inductive definition. However, for the assignment of fundamental sequences found in the first match up occurs at the level ε0. For Buchholz style tree ordinals it could be shown that the first match up even occurs at .
Extensions of the result proved to considerably larger ordinals show that there are very few ordinals below the ordinal of transfinitely iterated Π^1_1-comprehension where the slow- and fast-growing hierarchy match up.
The slow-growing hierarchy depends extremely sensitively on the choice of the underlying fundamental sequences.
References
See especially "A Glimpse at Hierarchies of Fast and Slow Growing Functions", pp. 59–64 of linked version.
Notes
Computability theory
Proof theory
Hierarchy of functions | Slow-growing hierarchy | [
"Mathematics"
] | 492 | [
"Computability theory",
"Mathematical logic",
"Proof theory"
] |
25,264,191 | https://en.wikipedia.org/wiki/Hardy%20hierarchy | In computability theory, computational complexity theory and proof theory, the Hardy hierarchy, named after G. H. Hardy, is a hierarchy of sets of numerical functions generated from an ordinal-indexed family of functions hα: N → N (where N is the set of natural numbers, {0, 1, ...}) called Hardy functions. It is related to the fast-growing hierarchy and slow-growing hierarchy.
The Hardy hierarchy was introduced by Stanley S. Wainer in 1972, but the idea of its definition comes from Hardy's 1904 paper, in which Hardy exhibits a set of reals with cardinality ℵ1.
Definition
Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ. The Hardy functions hα: N → N, for α < μ, are then defined as follows:

h0(n) = n
hα+1(n) = hα(n + 1)
hα(n) = hα[n](n) if α is a limit ordinal.
Here α[n] denotes the nth element of the fundamental sequence assigned to the limit ordinal α. A standardized choice of fundamental sequence for all α ≤ ε0 is described in the article on the fast-growing hierarchy.
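As a small illustration of this recursion (an implementation sketch, not part of the definition above), the following code evaluates hα for ordinals below ω², written as ω·a + b and represented by the pair (a, b), using the standard fundamental sequence (ω·(a+1))[n] = ω·a + n:

```python
def h(a, b, n):
    """Hardy function h_alpha(n) for alpha = omega*a + b."""
    if a == 0 and b == 0:        # h_0(n) = n
        return n
    if b > 0:                    # successor ordinal: h_{alpha+1}(n) = h_alpha(n + 1)
        return h(a, b - 1, n + 1)
    return h(a - 1, n, n)        # limit ordinal: h_{omega*a}(n) = h_{omega*(a-1) + n}(n)

print(h(0, 3, 5))   # h_3(5)         = 8
print(h(1, 0, 5))   # h_omega(5)     = 10
print(h(2, 0, 5))   # h_{omega*2}(5) = 20
```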
The Hardy hierarchy is a family of numerical functions. For each ordinal α, a set 𝓗α is defined as the smallest class of functions containing hα, zero, successor and projection functions, and closed under limited primitive recursion and limited substitution (similar to the Grzegorczyk hierarchy).
A modified Hardy hierarchy of functions can be defined by using the standard fundamental sequences, but with α[n+1] (instead of α[n]) in the third line of the above definition.
Relation to fast-growing hierarchy
The Wainer hierarchy of functions fα and the Hardy hierarchy of functions Hα are related by fα = Hω^α for all α < ε0. Thus, for any α < ε0, Hα grows much more slowly than does fα. However, the Hardy hierarchy "catches up" to the Wainer hierarchy at α = ε0, such that fε0 and Hε0 have the same growth rate, in the sense that fε0(n−1) ≤ Hε0(n) ≤ fε0(n+1) for all n ≥ 1.
Notes
References
(In particular Section 12, pp. 59–64, "A Glimpse at Hierarchies of Fast and Slow Growing Functions".)
Computability theory
Proof theory
Hierarchy of functions | Hardy hierarchy | [
"Mathematics"
] | 501 | [
"Computability theory",
"Mathematical logic",
"Proof theory"
] |
25,274,578 | https://en.wikipedia.org/wiki/Integrity%20engineering%20audit | An Integrity Engineering Audit is carried out within an Integrity engineering function so as to ensure compliance with international, national and company specific standards and regulations.
It is carried out in order to prove that the system is compliant, transparent, effective and efficient. API Recommended Practice 580, Risk-Based Inspection (see American Petroleum Institute) outlines such an audit as part of a Risk Based Inspection program. It checks that the most efficient and cost effective implementation of inspections and integrity management programs is being carried out. It ensures that the integrity of plant facilities, including all onshore and offshore structures and pipelines, stationary equipment and piping systems, is being correctly addressed. It checks that the Integrity Engineer has identified, investigated and assessed all deterioration and corrosion, and that timely maintenance of the affected facilities has been carried out. It audits the Inspection and Corrosion Control Policy and Risk Based Inspection (RBI) methods which manage the integrity, and checks that the optimum inspection frequency, maintenance cost and plant availability are being met. It may be approached under a generic framework such as ISO 19011 on the basis of a technical audit without formal documentation, but with regulatory or statutory criteria.
References
1 Implementation of Asset Integrity Management System Muhammad Abduh PetroEnergy Magazine April – May 2008 Edition http://abduh137.wordpress.com/2008/05/04/aims/
2 Offshore Information Sheet 4/2006 Offshore Installations (Safety Case) Regulations 2005 Regulation 13 Thorough Review of a Safety Case (Revised and reissued July 2008)
3 Structural integrity management framework for fixed jacket structures Prepared by Atkins Limited for the Health and Safety Executive 2009 Research Report RR684
4 Audit of Integrity Management Systems http://www.advantica.biz/Default.aspx?page=639
5 Plant Integrity Management Services Germanischer Lloyd
https://web.archive.org/web/20110807181637/http://www.gl-nobledenton.com/assets/downloads/13.Plant_Integrity_Management_Services_external.pdf
6 Guidelines for Auditing Process Safety Management Systems Ccps, Center for Chemical Process Safety (CCPS) - Technology & Engineering - 2011 - 250 pages
Maintenance | Integrity engineering audit | [
"Engineering"
] | 448 | [
"Maintenance",
"Mechanical engineering"
] |
1,897,245 | https://en.wikipedia.org/wiki/Coating | A coating is a covering that is applied to the surface of an object, or substrate. The purpose of applying the coating may be decorative, functional, or both. Coatings may be applied as liquids, gases or solids e.g. powder coatings.
Paints and lacquers are coatings that mostly have dual uses, which are protecting the substrate and being decorative, although some artists paints are only for decoration, and the paint on large industrial pipes is for identification (e.g. blue for process water, red for fire-fighting control) in addition to preventing corrosion. Along with corrosion resistance, functional coatings may also be applied to change the surface properties of the substrate, such as adhesion, wettability, or wear resistance. In other cases the coating adds a completely new property, such as a magnetic response or electrical conductivity (as in semiconductor device fabrication, where the substrate is a wafer), and forms an essential part of the finished product.
A major consideration for most coating processes is controlling coating thickness. Methods of achieving this range from a simple brush to expensive precision machinery in the electronics industry. Limiting coating area is crucial in some applications, such as printing.
"Roll-to-roll" or "web-based" coating is the process of applying a thin film of functional material to a substrate on a roll, such as paper, fabric, film, foil, or sheet stock. This continuous process is highly efficient for producing large volumes of coated materials, which are essential in various industries including printing, packaging, and electronics. The technology allows for consistent high-quality application of the coating material over large surface areas, enhancing productivity and uniformity.
Applications
Coatings can be both decorative and have other functions. A pipe carrying water for a fire suppression system can be coated with a red (for identification) anticorrosion paint. Most coatings to some extent protect the substrate, such as maintenance coatings for metals and concrete. A decorative coating can offer a particular reflective property, such as high gloss, satin, matte, or flat appearance.
A major coating application is to protect metal from corrosion. Automotive coatings are used to enhance the appearance and durability of vehicles. These include primers, basecoats, and clearcoats, primarily applied with spray guns and electrostatically.
The body and underbody of automobiles receive some form of underbody coating. Such anticorrosion coatings may use graphene in combination with water-based epoxies.
Coatings are used to seal the surface of concrete, such as seamless polymer/resin flooring, bund wall/containment lining, waterproofing and damp proofing concrete walls, and bridge decks.
Most roof coatings are designed primarily for waterproofing, though sun reflection (to reduce heating and cooling) may also be a consideration. They tend to be elastomeric to allow for movement of the roof without cracking within the coating membrane.
Wood has been a key material in construction since ancient times, so its preservation by coating has received much attention. Efforts to improve the performance of wood coatings continue.
Coatings are used to alter tribological properties and wear characteristics. These include anti-friction, wear and scuffing resistance coatings for rolling-element bearings.
Other
Other functions of coatings include:
Anti-fouling coatings
Anti-microbial coatings.
Anti-reflective coatings for example on spectacles.
Coatings that alter or have magnetic, electrical or electronic properties.
Flame retardant coatings. Flame-retardant materials and coatings are being developed that are phosphorus-based and bio-based. These include coatings with intumescent functionality.
Non-stick PTFE coated cooking pots/pans.
Optical coatings are available that alter optical properties of a material or object.
UV coatings
Analysis and characterization
Numerous destructive and non-destructive evaluation (NDE) methods exist for characterizing coatings. The most common destructive method is microscopy of a mounted cross-section of the coating and its substrate. The most common non-destructive techniques include ultrasonic thickness measurement, X-ray fluorescence (XRF), X-ray diffraction (XRD), photothermal coating thickness measurement and micro hardness indentation. X-ray photoelectron spectroscopy (XPS) is also a classical characterization method to investigate the chemical composition of the nanometer-thick surface layer of a material. Scanning electron microscopy coupled with energy dispersive X-ray spectrometry (SEM-EDX, or SEM-EDS) makes it possible to visualize the surface texture and to probe its elemental chemical composition. Other characterization methods include transmission electron microscopy (TEM), atomic force microscopy (AFM), scanning tunneling microscopy (STM), and Rutherford backscattering spectrometry (RBS). Various methods of chromatography are also used, as well as thermogravimetric analysis.
Formulation
The formulation of a coating depends primarily on the function required of the coating and also on the aesthetics required, such as color and gloss. The four primary ingredients are the resin (or binder), the solvent (which may be water, or absent in solventless coatings), pigment(s) and additives. Research is ongoing to remove heavy metals from coating formulations completely.
For example, on the basis of experimental and epidemiological evidence, at least one such heavy-metal compound has been classified by the IARC (International Agency for Research on Cancer) as a human carcinogen by inhalation (class I) (ISPESL, 2008).
Processes
Coating processes may be classified as follows:
Vapor deposition
Chemical vapor deposition
Metalorganic vapour phase epitaxy
Electrostatic spray assisted vapour deposition (ESAVD)
Sherardizing
Some forms of Epitaxy
Molecular beam epitaxy
Physical vapor deposition
Cathodic arc deposition
Electron beam physical vapor deposition (EBPVD)
Ion plating
Ion beam assisted deposition (IBAD)
Magnetron sputtering
Pulsed laser deposition
Sputter deposition
Vacuum deposition
Vacuum evaporation, evaporation (deposition)
Pulsed electron deposition (PED)
Chemical and electrochemical techniques
Conversion coating
Autophoretic, the registered trade name of a proprietary series of auto-depositing coatings specifically for ferrous metal substrates
Anodising
Chromate conversion coating
Plasma electrolytic oxidation
Phosphate (coating)
Ion beam mixing
Pickled and oiled, a type of plate steel coating
Plating
Electroless plating
Electroless nickel plating, a coating deposited without an external electric current that can preserve the substrate's mechanical properties
Electroplating
Spraying
Spray painting
High velocity oxygen fuel (HVOF)
Plasma spraying
Thermal spraying
Kinetic metallization (KM)
Plasma transferred wire arc thermal spraying
The common forms of Powder coating
Roll-to-roll coating processes
Common roll-to-roll coating processes include:
Air knife coating
Anilox coater
Flexo coater
Gap Coating
Knife-over-roll coating
Gravure coating
Hot melt coating - when the necessary coating viscosity is achieved by temperature rather than by dissolving the polymers in a solvent. This method commonly implies slot-die coating above room temperature, but hot-melt roller coating, hot-melt metering-rod coating, etc. are also possible.
Immersion dip coating
Kiss coating
Metering rod (Meyer bar) coating
Roller coating
Forward roller coating
Reverse roll coating
Silk Screen coater
Rotary screen
Slot die coating - originally developed in the 1950s, slot die coating has a low operational cost and is an easily scaled processing technique for depositing thin and uniform films rapidly, while minimizing material waste. Slot die coating technology is used to deposit a variety of liquid chemistries onto substrates of various materials such as glass, metal, and polymers by precisely metering the process fluid and dispensing it at a controlled rate while the coating die is precisely moved relative to the substrate. The complex inner geometry of conventional slot dies requires machining, or can be produced by 3-D printing.
Extrusion coating - generally high pressure, often high temperature, and with the web travelling much faster than the speed of the extruded polymer
Curtain coating - low viscosity, with the slot vertically above the web and a gap between slot-die and web.
Slide coating - bead coating with an angled slide between the slot-die and the bead. Commonly used for multilayer coating in the photographic industry.
Slot die bead coating - typically with the web backed by a roller and a very small gap between slot-die and web.
Tensioned-web slot-die coating - with no backing for the web.
Inkjet printing
Lithography
Flexography
Physical
Langmuir-Blodgett
Spin coating
Dip coating
See also
Adhesion Tester
Deposition
Electrostatic coating
Film coating drugs
Food coating
Formulations
Langmuir-Blodgett film
Nanoparticle deposition
Optically active additive, for inspection purposes after a coating operation
Paint
Paper coating
Plastic film
Polymer science
Printed electronics
Seal (mechanical)
Thermal barrier coating
Thermal cleaning
Thin-film deposition
Thermosetting polymer
Vitreous enamel
References
Further reading
Titanium and titanium alloys, edited by C. Leyens and M. Peters, Wiley-VCH, table 6.2: overview of several coating systems and fabrication processes for titanium alloys and titanium aluminides (amended)
Coating Materials for Electronic Applications: Polymers, Processes, Reliability, Testing by James J. Licari; William Andrew Publishing, Elsevier
High-Performance Organic Coatings, ed. AS Khanna, Elsevier BV, 2015
Corrosion
Materials science
Printing | Coating | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,938 | [
"Applied and interdisciplinary physics",
"Metallurgy",
"Coatings",
"Materials science",
"Corrosion",
"Electrochemistry",
"nan",
"Materials degradation"
] |
1,899,305 | https://en.wikipedia.org/wiki/Boltzmann%20relation | In a plasma, the Boltzmann relation describes the number density of an isothermal charged particle fluid when the thermal and the electrostatic forces acting on the fluid have reached equilibrium.
In many situations, the electron density of a plasma is assumed to behave according to the Boltzmann relation, due to their small mass and high mobility.
Equation
If the local electrostatic potentials at two nearby locations are φ1 and φ2, the Boltzmann relation for the electrons takes the form:
n_e(φ2) = n_e(φ1) exp[e(φ2 − φ1) / (k_B T_e)],
where n_e is the electron number density, T_e is the temperature of the plasma, e is the elementary charge, and k_B is the Boltzmann constant.
Derivation
A simple derivation of the Boltzmann relation for the electrons can be obtained using the momentum fluid equation of the two-fluid model of plasma physics in the absence of a magnetic field. When the electrons reach dynamic equilibrium, the inertial and the collisional terms of the momentum equations are zero, and the only terms left in the equation are the pressure and electric terms. For an isothermal fluid, the pressure force takes the form
F_p = −∇p_e = −k_B T_e ∇n_e,
while the electric term is
F_E = −e n_e E = e n_e ∇φ.
Setting the sum of these two forces to zero and integrating leads to the expression given above (written out compactly below).
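For reference, the balance and integration just described can be written out in LaTeX (a sketch using the same symbols as above; it assumes the electron charge is −e and E = −∇φ):

```latex
0 = -\nabla p_e - e\,n_e \mathbf{E}
  = -k_B T_e\,\nabla n_e + e\,n_e \nabla\varphi
\;\;\Longrightarrow\;\;
k_B T_e\,\nabla \ln n_e = e\,\nabla\varphi
\;\;\Longrightarrow\;\;
n_e(\varphi_2) = n_e(\varphi_1)\,
  \exp\!\left[\frac{e\,(\varphi_2-\varphi_1)}{k_B T_e}\right].
```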
In many problems of plasma physics, it is not useful to calculate the electric potential on the basis of the Poisson equation because the electron and ion densities are not known a priori, and if they were, because of quasineutrality the net charge density is the small difference of two large quantities, the electron and ion charge densities. If the electron density is known and the assumptions hold sufficiently well, the electric potential can be calculated simply from the Boltzmann relation.
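As a concrete illustration of this shortcut, the sketch below (illustrative only; the 1 eV temperature and the density profile are made-up values, and the inversion assumes the Boltzmann relation holds exactly) recovers the potential directly from a known electron density instead of solving the Poisson equation:

```python
# Minimal sketch: invert the Boltzmann relation
#   n_e = n_ref * exp(e * (phi - phi_ref) / (k_B * T_e))
# to get the potential from a known electron density profile.
# Note: k_B * T_e / e expressed in volts equals the temperature in eV.
import numpy as np

T_e_eV = 1.0                  # electron temperature in eV (assumed value)
n_ref = 1e18                  # reference density in m^-3 (assumed value)

n_e = n_ref * np.array([1.0, 0.8, 0.5, 0.1])   # example density profile
phi = T_e_eV * np.log(n_e / n_ref)             # potential in volts (phi_ref = 0)

print(phi)   # approx [0, -0.22, -0.69, -2.30] V: lower density, lower potential
```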
Inaccurate situations
Discrepancies with the Boltzmann relation can occur, for example, when oscillations occur so fast that the electrons cannot find a new equilibrium (see e.g. plasma oscillations) or when the electrons are prevented from moving by a magnetic field (see e.g. lower hybrid oscillations).
References
Plasma physics equations | Boltzmann relation | [
"Physics"
] | 401 | [
"Equations of physics",
"Plasma physics equations"
] |
1,900,416 | https://en.wikipedia.org/wiki/TomPaine.com | TomPaine.com was a website with news and opinion on United States politics from a progressive perspective, named after the political writer Thomas Paine. It featured a mixture of original articles and links to articles on other websites.
TomPaine.com was founded in 1999 by John Moyers as an independent, non-profit journal of opinion. The project became best known for its opinion advertisements — or "op ads," a term coined by Moyers — which ran almost weekly on the op-ed page of the New York Times, and also in the Weekly Standard, Roll Call, and other publications.
Between 1999 and 2003, Moyers conceived and wrote some 120 op ads. Some of those launched national controversies and were noted, quoted, cited and/or copycatted in The New York Times, Newsweek, Time, Reuters, the Associated Press, The International Herald Tribune, Der Spiegel and dozens of other publications and Web sites; on the CBS, NBC, and ABC evening newscasts; and on numerous cable news outlets. An op ad was reprinted in a college-level textbook as an example of effective mission-driven communications. Rolling Stone dubbed TomPaine.com a "cool irritant," calling its op ads "perhaps the media's most visible outlet for apple-cart-upsetting truths about glossed-over issues." In April 2001, Alternet.org named Moyers one of six "New Media Heroes." PC Magazine called the website "a great example of what an online journal can be." The Communication Workers of America and the Newspaper Guild awarded the 2003 Herbert Block Freedom Award to John Moyers and the staff of TomPaine.com for being "a consistent voice of reason and democratic discourse at a time of increased political attacks on civil liberties and a flattening of discourse in the mainstream media."
Moyers left TomPaine.com at the end of 2003, and the site is now a project of the Institute for America's Future, a progressive think tank.
References
External links
TomPaine.com
American political websites
Cultural depictions of Thomas Paine
Internet properties established in 1999
1999 establishments in the United States | TomPaine.com | [
"Technology"
] | 453 | [
"Computing stubs",
"World Wide Web stubs"
] |
1,901,903 | https://en.wikipedia.org/wiki/Trigonal%20bipyramidal%20molecular%20geometry | In chemistry, a trigonal bipyramid formation is a molecular geometry with one atom at the center and 5 more atoms at the corners of a triangular bipyramid. This is one geometry for which the bond angles surrounding the central atom are not identical (see also pentagonal bipyramid), because there is no geometrical arrangement with five terminal atoms in equivalent positions. Examples of this molecular geometry are phosphorus pentafluoride (), and phosphorus pentachloride () in the gas phase.
Axial (or apical) and equatorial positions
The five atoms bonded to the central atom are not all equivalent, and two different types of position are defined. For phosphorus pentachloride as an example, the phosphorus atom shares a plane with three chlorine atoms at 120° angles to each other in equatorial positions, and two more chlorine atoms above and below the plane (axial or apical positions).
According to the VSEPR theory of molecular geometry, an axial position is more crowded because an axial atom has three neighboring equatorial atoms (on the same central atom) at a 90° bond angle, whereas an equatorial atom has only two neighboring axial atoms at a 90° bond angle. For molecules with five identical ligands, the axial bond lengths tend to be longer because the ligand atom cannot approach the central atom as closely. As examples, in PF5 the axial P−F bond length is 158 pm and the equatorial is 152 pm, and in PCl5 the axial and equatorial are 214 and 202 pm respectively.
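As a quick check on the angles quoted above, the short sketch below (illustrative only; it uses idealized unit-length bond vectors rather than the measured P−F or P−Cl distances) builds the five ligand directions of an ideal trigonal bipyramid and confirms the 90° axial–equatorial and 120° equatorial–equatorial angles:

```python
# Minimal sketch: ideal trigonal bipyramidal ligand directions and the
# angles between them (unit-length bonds; real axial bonds are longer).
import itertools
import numpy as np

axial = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
equatorial = [np.array([np.cos(a), np.sin(a), 0.0])
              for a in np.radians([0.0, 120.0, 240.0])]

def angle(u, v):
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

for u, v in itertools.combinations(axial + equatorial, 2):
    print(round(angle(u, v), 1))   # 180 (ax-ax), 90 (ax-eq), 120 (eq-eq)
```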
In the mixed halide PF3Cl2 the chlorines occupy two of the equatorial positions, indicating that fluorine has a greater apicophilicity or tendency to occupy an axial position. In general ligand apicophilicity increases with electronegativity and also with pi-electron withdrawing ability, as in the sequence Cl < F < CN. Both factors decrease electron density in the bonding region near the central atom so that crowding in the axial position is less important.
Related geometries with lone pairs
The VSEPR theory also predicts that substitution of a ligand at a central atom by a lone pair of valence electrons leaves the general form of the electron arrangement unchanged with the lone pair now occupying one position. For molecules with five pairs of valence electrons including both bonding pairs and lone pairs, the electron pairs are still arranged in a trigonal bipyramid but one or more equatorial positions is not attached to a ligand atom so that the molecular geometry (for the nuclei only) is different.
The seesaw molecular geometry is found in sulfur tetrafluoride (SF4) with a central sulfur atom surrounded by four fluorine atoms occupying two axial and two equatorial positions, as well as one equatorial lone pair, corresponding to an AX4E molecule in the AXE notation. A T-shaped molecular geometry is found in chlorine trifluoride (ClF3), an AX3E2 molecule with fluorine atoms in two axial and one equatorial position, as well as two equatorial lone pairs. Finally, the triiodide ion (I3−) is also based upon a trigonal bipyramid, but the actual molecular geometry is linear with terminal iodine atoms in the two axial positions only and the three equatorial positions occupied by lone pairs of electrons (AX2E3); another example of this geometry is provided by xenon difluoride, XeF2.
Berry pseudorotation
Isomers with a trigonal bipyramidal geometry are able to interconvert through a process known as Berry pseudorotation. Pseudorotation is similar in concept to the movement of a conformational diastereomer, though no full revolutions are completed. In the process of pseudorotation, two equatorial ligands (both of which have a shorter bond length than the third) "shift" toward the molecule's axis, while the axial ligands simultaneously "shift" toward the equator, creating a constant cyclical movement. Pseudorotation is particularly notable in simple molecules such as phosphorus pentafluoride (PF5).
See also
AXE method
Molecular geometry
References
External links
Indiana University Molecular Structure Center
Interactive molecular examples for point groups
Molecular Modeling
Animated Trigonal Planar Visual
Stereochemistry
Molecular geometry | Trigonal bipyramidal molecular geometry | [
"Physics",
"Chemistry"
] | 879 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Space",
"nan",
"Spacetime",
"Matter"
] |
1,902,224 | https://en.wikipedia.org/wiki/Monoscope | A monoscope was a special form of video camera tube which displayed a single still video image. The image was built into the tube, hence the name. The tube resembled a small cathode-ray tube (CRT). Monoscopes were used beginning in the 1950s to generate TV test patterns and station logos. This type of test card generation system was technologically obsolete by the 1980s.
Design
The monoscope was similar in construction to a CRT, with an electron gun at one end and at the other, a metal target screen with an image formed on it. This was in the position where a CRT would have its phosphor-coated display screen. As the electron beam scanned the target, varying numbers of electrons were reflected from the different areas of the image. The reflected electrons were picked up by an internal electrode ring, producing a varying electrical signal which was amplified to become the video output of the tube.
This signal reproduced an accurate still image of the target, so the monoscope was used to produce still images such as test patterns and station logo cards. For example, the classic Indian Head test card as used by many television stations in North America, was often produced using a monoscope.
Usage
Monoscopes were available with a wide variety of standard patterns and messages, and could be ordered with a custom image such as a station logo. Monoscope "cameras" were widely used to produce test cards, station logos, special signals for test purposes and standard announcements like "Please stand by" and "normal service will be resumed....". They had many advantages over using a live camera pointed at a card; an expensive camera was not tied up, they were always ready, and were never misframed or out of focus. Indeed, monoscopes were often used to calibrate the live cameras, by comparing the monoscope image and the live camera image of the same test pattern.
Pointing an electronic camera at the same stationary monochrome caption for a long period of time could result in the image becoming burnt onto the camera tube's target — and even onto the phosphor of a monitor displaying it in extreme cases.
Monoscopes were used as character generators for text mode video rendering in computer displays for a short time in the 1960s. The monoscope declined in popularity after the 1960s due to its inability to generate a colour test card, and the development of solid state TV test pattern signal generators.
See also
Test card, the updated, coloured version of monoscopes.
References
External links
Monoscope tubes
The Museum of the Broadcast TV Camera, picture and description of the Marconi BD617B portable Monoscope camera (UK)
Picture and description of the RCA TK-1 Monoscope (US)
Indian Head - "as transmitted" picture
2F21 data sheet
Display technology
Television technology
Test cards
Vacuum tubes | Monoscope | [
"Physics",
"Technology",
"Engineering"
] | 581 | [
"Information and communications technology",
"Television technology",
"Vacuum tubes",
"Vacuum",
"Electronic engineering",
"Display technology",
"Matter"
] |
1,902,312 | https://en.wikipedia.org/wiki/Pathogen-associated%20molecular%20pattern | Pathogen-associated molecular patterns (PAMPs) are small molecular motifs conserved within a class of microbes, but not present in the host. They are recognized by toll-like receptors (TLRs) and other pattern recognition receptors (PRRs) in both plants and animals. This allows the innate immune system to recognize pathogens and thus, protect the host from infection.
Although the term "PAMP" is relatively new, the concept that molecules derived from microbes must be detected by receptors from multicellular organisms has been held for many decades, and references to an "endotoxin receptor" are found in much of the older literature. The recognition of PAMPs by the PRRs triggers activation of several signaling cascades in the host immune cells like the stimulation of interferons (IFNs) or other cytokines.
Common PAMPs
A vast array of different types of molecules can serve as PAMPs, including glycans and glycoconjugates. Flagellin is another PAMP; it is recognized via its constant domain, D1, by TLR5. Despite being a protein, its N- and C-terminal ends are highly conserved, because they are necessary for the function of flagella. Nucleic acid variants normally associated with viruses, such as double-stranded RNA (dsRNA), are recognized by TLR3, and unmethylated CpG motifs are recognized by TLR9. The CpG motifs must be internalized in order to be recognized by TLR9. Viral glycoproteins, as seen in the viral envelope, as well as fungal PAMPs on the cell surface of fungi, are recognized by TLR2 and TLR4.
Gram-negative bacteria
Bacterial lipopolysaccharides (LPSs), also known as endotoxins, are found on the cell membranes of gram-negative bacteria and are considered to be the prototypical class of PAMPs. The lipid portion of LPS, lipid A, contains a diglucosamine backbone with multiple acyl chains. This is the conserved structural motif that is recognized by TLR4, particularly the TLR4-MD2 complex. Microbes have two main strategies by which they try to evade the immune system: either masking lipid A or directing their LPS towards an immunomodulatory receptor.
Peptidoglycan (PG) is also found within the cell walls of gram-negative bacteria and is recognized by TLR2, usually as a heterodimer with TLR1 or TLR6.
Gram-positive bacteria
Lipoteichoic acid (LTA) from gram-positive bacteria, bacterial lipoproteins (sBLP), a phenol soluble factor from Staphylococcus epidermidis, and a component of yeast walls called zymosan, are all recognized by a heterodimer of TLR2 and TLR1 or TLR6. However, LTAs result in a weaker pro-inflammatory response compared to lipopeptides, as they are only recognized by TLR2 instead of the heterodimer.
History
First introduced by Charles Janeway in 1989, PAMP was used to describe microbial components that would be considered foreign in a multicellular host. The term "PAMP" has been criticized on the grounds that most microbes, not only pathogens, express the molecules detected; the term microbe-associated molecular pattern (MAMP), has therefore been proposed. A virulence signal capable of binding to a pathogen receptor, in combination with a MAMP, has been proposed as one way to constitute a (pathogen-specific) PAMP. Plant immunology frequently treats the terms "PAMP" and "MAMP" interchangeably, considering their recognition to be the first step in plant immunity, PTI (PAMP-triggered immunity), a relatively weak immune response that occurs when the host plant does not also recognize pathogenic effectors that damage it or modulate its immune response.
In mycobacteria
Mycobacteria are intracellular bacteria which survive in host macrophages. The mycobacterial wall is composed of lipids and polysaccharides and also contains high amounts of mycolic acid. Purified cell wall components of mycobacteria activate mainly TLR2 and also TLR4. Lipomannan and lipoarabinomannan are strong immunomodulatory lipoglycans. TLR2 with association of TLR1 can recognize cell wall lipoprotein antigens from Mycobacterium tuberculosis, which also induce production of cytokines by macrophages. TLR9 can be activated by mycobacterial DNA.
See also
DAMP
Tissue remodeling
References
Further reading
Immune system | Pathogen-associated molecular pattern | [
"Biology"
] | 995 | [
"Immune system",
"Organ systems"
] |
6,330,972 | https://en.wikipedia.org/wiki/Therapeutic%20gene%20modulation | Therapeutic gene modulation refers to the practice of altering the expression of a gene at one of various stages, with a view to alleviate some form of ailment. It differs from gene therapy in that gene modulation seeks to alter the expression of an endogenous gene (perhaps through the introduction of a gene encoding a novel modulatory protein) whereas gene therapy concerns the introduction of a gene whose product aids the recipient directly.
Modulation of gene expression can be mediated at the level of transcription by DNA-binding agents (which may be artificial transcription factors), small molecules, or synthetic oligonucleotides. It may also be mediated post-transcriptionally through RNA interference.
Transcriptional gene modulation
An approach to therapeutic modulation utilizes agents that modulate endogenous transcription by specifically targeting those genes at the gDNA level. The advantage to this approach over modulation at the mRNA or protein level is that every cell contains only a single gDNA copy. Thus the target copy number is significantly lower allowing the drugs to theoretically be administered at much lower doses.
This approach also offers several advantages over traditional gene therapy. Directly targeting endogenous transcription should yield correct relative expression of splice variants. In contrast, traditional gene therapy typically introduces a gene which can express only one transcript, rather than a set of stoichiometrically-expressed spliced transcript variants. Additionally, virally-introduced genes can be targeted for gene silencing by methylation which can counteract the effect of traditional gene therapy. This is not anticipated to be a problem for transcriptional modulation as it acts on endogenous DNA.
There are three major categories of agents that act as transcriptional gene modulators: triplex-forming oligonucleotides (TFOs), synthetic polyamides (SPAs), and DNA binding proteins.
Triplex-forming oligonucleotides
What are they
Triplex-forming oligonucleotides (TFO) are one potential method to achieve therapeutic gene modulation. TFOs are approximately 10-40 base pairs long and can bind in the major groove in duplex DNA which creates a third strand or a triple helix. The binding occurs at polypurine or polypyrimidine regions via Hoogsteen hydrogen bonds to the purine (A / G) bases on the double stranded DNA that is already in the form of the Watson-Crick helix.
How they work
TFOs can be either polypurine or polypyrimidine molecules and bind to one of the two strands in the double helix in either parallel or antiparallel orientation to target polypurine or polypyrimidine regions. Since the DNA-recognition codes are different for the parallel and the anti-parallel fashion of TFO binding, TFOs composed of pyrimidines (C / T) bind to the purine-rich strand of the target double helix via Hoogsteen hydrogen bonds in a parallel fashion. TFOs composed of purines (A / G), or of mixed purines and pyrimidines, bind to the same purine-rich strand via reverse Hoogsteen bonds in an anti-parallel fashion. TFOs can thus recognize purine-rich target strands of duplex DNA.
Complications and limitations
In order for TFO motifs to bind in a parallel fashion and create hydrogen bonds, the nitrogen atom at position 3 on the cytosine residue needs to be protonated, but at physiological pH levels it is not, which could prevent parallel binding.
Another limitation is that TFOs can only bind to purine-rich target strands and this would limit the choice of endogenous gene target sites to polypurine-polypyrimidine stretches in duplex DNA. If a method to also allow TFOs to bind to pyrimidine bases was generated, this would enable TFOs to target any part of the genome. Also the human genome is rich in polypurine and polypyrimidine sequences which could affect the specificity of TFO to bind to a target DNA region. An approach to overcome this limitation is to develop TFOs with modified nucleotides that act as locked nucleic acids to increase the affinity of the TFO for specific target sequences.
Other limitations include concerns regarding binding affinity and specificity, in vivo stability, and uptake into cells. Researchers are attempting to overcome these limitations by improving TFO characteristics through chemical modifications, such as modifying the TFO backbone to reduce electrostatic repulsions between the TFO and the DNA duplex. Also due to their high molecular weight, uptake into cells is limited and some strategies to overcome this include DNA condensing agents, coupling of the TFO to hydrophobic residues like cholesterol, or cell permeabilization agents.
What can they do
Scientists are still refining the technology to turn TFOs into a therapeutic product and much of this revolves around their potential applications in antigene therapy. In particular they have been used as inducers of site-specific mutations, as reagents that selectively and specifically cleave target DNA, and as modulators of gene expression. One such gene sequence modification method works by targeting DNA with TFOs to activate a target gene. If a target sequence is located between two inactive copies of a gene, DNA ligands such as TFOs can bind to the target site, where they are recognized as DNA lesions. To fix these lesions, DNA repair complexes are assembled on the targeted sequence and the DNA is repaired. Damage of the intramolecular recombination substrate can then be repaired and detected if resection goes far enough to produce compatible ends on both sides of the cleavage site; the 3' overhangs are then ligated, leading to the formation of a single active copy of the gene and the loss of all the sequences between the two copies of the gene.
In model systems TFOs can inhibit gene expression at the DNA level as well as induce targeted mutagenesis. TFO-induced inhibition of transcription elongation on endogenous targets has been tested on cell cultures with success. However, despite much in vitro success, there has been limited achievement in cellular applications, potentially due to target accessibility.
TFOs have the potential to silence genes by targeting transcription initiation or elongation, arresting at the triplex binding sites, or introducing permanent changes in a target sequence via stimulating a cell's inherent repair pathways. These applications can be relevant in creating cancer therapies that inhibit gene expression at the DNA level. Since aberrant gene expression is a hallmark of cancer, modulating the expression levels of these endogenous genes could potentially act as a therapy for multiple cancer types.
Synthetic polyamides
Synthetic polyamides are a set of small molecules that form specific hydrogen bonds to the minor groove of DNA. They can exert an effect either directly, by binding a regulatory region or transcribed region of a gene to modify transcription, or indirectly, by designed conjugation with another agent that makes alterations around the DNA target site.
Structure
Specific bases in the minor groove of DNA can be recognized and bound by small synthetic polyamides (SPAs). DNA-binding SPAs have been engineered to contain three polyamide amino acid components: hydroxypyrrole (Hp), imidazole (Im), and pyrrole (Py). Chains of these amino acids loop back on themselves in a hairpin structure. The amino acids on either side of the hairpin form a pair which can specifically recognize both sides of a Watson-Crick base pair. This occurs through hydrogen bonding within the minor groove of DNA. The amide pairs Py/Im, Py/Hp, Hp/Py, and Im/Py recognize the Watson-Crick base pairs C-G, A-T, T-A, and G-C, respectively. Recognition of the sequence 5'-GTAC-3' provides a simple example (see the sketch below). SPAs have low toxicity, but have not yet been used in human gene modulation.
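The pairing rules above can be written out mechanically. The sketch below is a toy illustration, not a real polyamide design tool; the dictionary and function names are made up. It maps each base on the primary strand of a short target such as 5'-GTAC-3' to the amide pair that would read it:

```python
# Minimal sketch of the SPA recognition code described above:
# which hairpin amide pair reads each Watson-Crick base pair.
PAIR_FOR_BASE = {          # primary-strand base -> (ring on one side, ring on the other)
    "G": ("Im", "Py"),     # Im/Py reads G-C
    "C": ("Py", "Im"),     # Py/Im reads C-G
    "T": ("Hp", "Py"),     # Hp/Py reads T-A
    "A": ("Py", "Hp"),     # Py/Hp reads A-T
}

def hairpin_pairs(primary_strand):
    """Return the amide pairs, read 5' to 3', for a short duplex target."""
    return [PAIR_FOR_BASE[base] for base in primary_strand.upper()]

print(hairpin_pairs("GTAC"))
# [('Im', 'Py'), ('Hp', 'Py'), ('Py', 'Hp'), ('Py', 'Im')]
```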
Limitations and workarounds
The major structural drawback to unmodified SPAs as gene modulators is that their recognition sequence cannot be extended beyond 5 Watson-Crick base pairings. The natural curvature of the DNA minor groove is too tight a turn for the hairpin structure to match. There are several groups with proposed workarounds to this problem. SPAs can be made to better follow the curvature of the minor groove by inserting beta-alanine which relaxes the structure. Another approach to extending the recognition length is to use several short hairpins in succession. This approach has increased the recognition length to up to eleven Watson-Crick base pairs.
Direct modulation
SPAs may inhibit transcription through binding within a transcribed region of a target gene. This inhibition occurs through blocking of elongation by an RNA polymerase.
SPAs may also modulate transcription by targeting a transcription regulator binding site. If the regulator is an activator of transcription, this will decrease transcriptional levels. As an example, SPA targeting to the binding site for the activating transcription factor TFIIIA has been demonstrated to inhibit transcription of the downstream 5S RNA. In contrast, if the regulator is a repressor, this will increase transcriptional levels. As an example, SPA targeting to the host factor LSF, which represses expression of the human immunodeficiency virus (HIV) type 1 long terminal repeat (LTR), blocks binding of LSF and consequently de-represses expression of the LTR.
Conjugate modulation
SPAs have not been shown to directly modify DNA or have activity other than direct blocking of other factors or processes. However, modifying agents can be bound to the tail ends of the hairpin structure. The specific binding of the SPA to DNA allows for site-specific targeting of the conjugated modifying agent.
SPAs have been paired with the DNA-alkylating moieties cyclopropylpyrroloindole and chlorambucil that were able to damage and crosslink SV40 DNA. This effect inhibited cell cycling and growth. Chlorambucil, a chemotherapeutic agent, was more effective when conjugated to an SPA than without.
In 2012, SPAs were conjugated to SAHA, a potent histone deacetylase (HDAC) inhibitor. SPAs with conjugated SAHA were targeted to Oct-3/4 and Nanog which induced epigenetic remodeling and consequently increased expression of multiple pluripotency related genes in mouse embryonic fibroblasts.
Designer zinc-finger proteins
What they are/structure
Designer zinc-finger proteins are engineered proteins used to target specific areas of DNA. These proteins capitalize on the DNA-binding capacity of natural zinc-finger domains to modulate specific target areas of the genome. In both designer and natural zinc-finger motifs, the protein consists of two β-sheets and one α-helix. Two histidine residues on the α-helix and two cysteine residues on the β-sheets are bonded to a zinc atom, which serves to stabilize the protein domain as a whole. This stabilization particularly benefits the α-helix in its function as the DNA-recognition and -binding domain. Transcription factor TFIIIA is an example of a naturally-occurring protein with zinc-finger motifs.
How they work
Zinc-finger motifs bind into the major groove of helical DNA, where the amino acid residue sequence on the α-helix gives the motif its target sequence specificity. The domain binds to a seven-nucleotide sequence of DNA (positions 1 through 6 on the primary strand of DNA, plus positions 0 and 3 on the complementary strand), thereby ensuring that the protein motif is highly selective of its target. In engineering a designer zinc-finger protein, researchers can utilize techniques such as site-directed mutagenesis followed by randomized trials for binding capacity, or the in vitro recombination of motifs with known target specificity to produce a library of sequence-specific final proteins.
Effects and impacts on gene modulation
Designer zinc-finger proteins can modulate genome expression in a number of ways. Ultimately, two factors are primarily responsible for the result on expression: whether the targeted sequence is a regulatory region or a coding region of DNA, and whether and what types of effector domains are bound to the zinc-finger domain. If the target sequence for an engineered designer protein is a regulatory domain - e.g., a promoter or a repressor of replication - the binding site for naturally-occurring transcription factors will be obscured, leading to a corresponding decrease or increase, respectively, in transcription for the associated gene. Similarly, if the target sequence is an exon, the designer zinc-finger will obscure the sequence from RNA polymerase transcription complexes, resulting in a truncated or otherwise nonfunctional gene product.
Effector domains bound to the zinc-finger can also have comparable effects. It is the function of these effector domains which are arguably the most important with respect to the use of designer zinc-finger proteins for therapeutic gene modulation. If a methylase domain is bound to the designer zinc-finger protein, when the zinc-finger protein binds to the target DNA sequence an increase in methylation state of DNA in that region will subsequently result. Transcription rates of genes so-affected will be reduced. Many of the effector domains function to modulate either the DNA directly - e.g. via methylation, cleaving, or recombination of the target DNA sequence - or by modulating its transcription rate - e.g. inhibiting transcription via repressor domains that block transcriptional machinery, promoting transcription with activation domains that recruit transcriptional machinery to the site, or histone- or other epigenetic-modification domains that affect chromatin state and the ability of transcriptional machinery to access the affected genes. Epigenetic modification is a major theme in determining varying expression levels for genes, as explained by the idea that how tightly-wound the DNA strand is - from histones at the local level up to chromatin at the chromosomal level - can influence the accessibility of sequences of DNA to transcription machinery, thereby influencing the rate at which it can be transcribed. If, instead of impacting the DNA strand directly, as described above, a designer zinc-finger protein instead affects epigenetic modification state for a target DNA region, modulation of gene expression could similarly be accomplished.
In the first case to successfully demonstrate the use of designer zinc-finger proteins to modulate gene expression in vivo, Choo et al. designed a protein consisting of three zinc-finger domains that targeted a specific sequence on a BCR-ABL fusion oncogene. This specific oncogene is implicated in acute lymphoblastic leukemia. The oncogene typically enables leukemia cells to proliferate in the absence of specific growth factors, a hallmark of cancer. By including a nuclear localization signal with the tri-domain zinc-finger protein in order to facilitate binding of the protein to genomic DNA in the nucleus, Choo et al. were able to demonstrate that their engineered protein could block transcription of the oncogene in vivo. Leukemia cells became dependent on regular growth factors, bringing the cell cycle back under the control of normal regulation.
Post-transcriptional gene modulation
The major approach to post-transcriptional gene modulation is via RNA interference (RNAi). The primary problem with using RNAi in gene modulation is drug delivery to target cells. RNAi gene modulation has been successfully applied in a mouse model of inflammatory bowel disease. This treatment utilized liposome-based, beta-7 integrin-targeted, stabilized nanoparticles entrapping short interfering RNAs (siRNAs). There are several other forms of RNAi delivery, including: polyplex delivery, ligand-siRNA conjugates, naked delivery, inorganic particle delivery using gold nanoparticles, and site-specific local delivery.
Clinical significance
Designer zinc-finger proteins, on the other hand, have undergone some trials in the clinical arena. The efficacy and safety of EW-A-401, an engineered zinc-finger transcription factor, as a pharmacologic agent for treating claudication, a cardiovascular ailment, have been investigated in clinical trials. The agent consists of an engineered plasmid DNA that prompts the patient's cells to produce an engineered transcription factor, the target of which is the vascular endothelial growth factor-A (VEGF-A) gene, which positively influences blood vessel development. Although not yet approved by the U.S. Food and Drug Administration (FDA), two Phase I clinical studies have been completed which identify this zinc-finger protein as a promising and safe potential therapeutic agent for treatment of peripheral arterial disease in humans.
See also
Artificial transcription factor
Antisense therapy
Gene therapy
RNA interference
References
Medical genetics
Applied genetics | Therapeutic gene modulation | [
"Biology"
] | 3,495 | [
"Therapeutic gene modulation"
] |
6,333,385 | https://en.wikipedia.org/wiki/C-theorem | In quantum field theory the C-theorem states that there exists a positive real function, , depending on the coupling constants of the quantum field theory considered, , and on the energy scale, , which has the following properties:
C decreases monotonically under the renormalization group (RG) flow.
At fixed points of the RG flow, which are specified by a set of fixed-point couplings g_i^*, the function C is a constant, independent of energy scale.
The theorem formalizes the notion that theories at high energies have more degrees of freedom than theories at low energies and that information is lost as we flow from the former to the latter.
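One common way to summarize the two properties above (a sketch; conventions differ between authors) is in terms of a flow parameter t that increases as the theory flows toward the infrared, with β_i = dg_i/dt:

```latex
\frac{d}{dt}\,C\bigl(g_i(t)\bigr)
  \;=\; \beta_i(g)\,\frac{\partial C}{\partial g_i} \;\le\; 0 ,
\qquad
\beta_i(g^*) = 0 \;\Longrightarrow\; C(g^*) = \text{const} .
```

In two dimensions, as described below, the value of C at a fixed point equals the central charge of the corresponding conformal field theory.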
Two-dimensional case
Alexander Zamolodchikov proved in 1986 that two-dimensional quantum field theory always has such a C-function. Moreover, at fixed points of the RG flow, which correspond to conformal field theories, Zamolodchikov's C-function is equal to the central charge of the corresponding conformal field theory, which lends the name C to the theorem.
Four-dimensional case: A-theorem
John Cardy in 1988 considered the possibility to generalise the C-theorem to higher-dimensional quantum field theory. He conjectured that in four spacetime dimensions, the quantity behaving monotonically under renormalization group flows, and thus playing the role analogous to the central charge in two dimensions, is a certain anomaly coefficient which came to be denoted as a.
For this reason, the analog of the C-theorem in four dimensions is called the A-theorem.
In perturbation theory, that is for renormalization flows which do not deviate much from free theories, the A-theorem in four dimensions was proved by Hugh Osborn using the local renormalization group equation. However, the problem of finding a proof valid beyond perturbation theory remained open for many years.
In 2011, Zohar Komargodski and Adam Schwimmer of the Weizmann Institute of Science proposed a nonperturbative proof for the A-theorem, which has gained acceptance. (Still, simultaneously monotonic and cyclic (limit cycle) or even chaotic RG flows are compatible with such flow functions when they are multivalued in the couplings, as evinced in specific systems.) RG flows of theories in 4 dimensions, and the question of whether scale invariance implies conformal invariance, are areas of active research and not all questions are settled.
See also
Conformal field theory
References
Conformal field theory
Renormalization group
Mathematical physics
Theorems in quantum mechanics | C-theorem | [
"Physics",
"Mathematics"
] | 527 | [
"Theorems in quantum mechanics",
"Physical phenomena",
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Theorems in mathematical physics",
"Statistical mechanics",
"Mathematical physics",
"Physics theorem... |
6,334,273 | https://en.wikipedia.org/wiki/Detection%20of%20internally%20reflected%20Cherenkov%20light | In particle detectors a detection of internally reflected Cherenkov light (DIRC) detector measures the velocity of charged particles and is used for particle identification. It is a design of a ring imaging Cherenkov detector where Cherenkov light that is contained by total internal reflection inside the solid radiator has its angular information preserved until it reaches the light sensors at the detector perimeter.
A charged particle travelling through a material (for instance fused silica) with a speed greater than c/n (where n is the refractive index and c the vacuum speed of light) emits Cherenkov radiation. If the angle of the light at the surface is sufficiently shallow, this radiation is contained inside the radiator and transmitted through internal reflections to an expansion volume, coupled to photomultipliers (or other types of photon detectors), to measure the angle. Preserving the angle requires a precise planar and rectangular cross section of the radiator. Knowledge of the angle at which the radiation was produced, combined with the track angle and the particle's momentum (measured in a tracking detector like a drift chamber), may be used to calculate the particle's mass.
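A rough sketch of that last step is shown below (illustrative only, not detector code; it assumes an idealized radiator with a single refractive index, here n = 1.47 as a typical value for fused silica, and perfect angle and momentum measurements):

```python
# Minimal sketch: infer a particle's mass from its momentum and the
# Cherenkov angle, using cos(theta_c) = 1 / (n * beta) and p = gamma*m*beta
# (natural units, c = 1; masses and momenta in GeV).
import math

def mass_from_cherenkov(p, theta_c, n=1.47):
    beta = 1.0 / (n * math.cos(theta_c))           # velocity from the angle
    if beta >= 1.0:
        raise ValueError("unphysical Cherenkov angle: beta >= 1")
    return p * math.sqrt(1.0 / beta**2 - 1.0)      # m = p / (gamma * beta)

# Example: at p = 3 GeV/c a pion and a kaon emit at measurably different angles.
for name, m in [("pion", 0.1396), ("kaon", 0.4937)]:
    p = 3.0
    beta = p / math.sqrt(p**2 + m**2)
    theta_c = math.acos(1.0 / (1.47 * beta))
    print(f"{name}: theta_c = {1e3 * theta_c:.0f} mrad, "
          f"recovered mass = {mass_from_cherenkov(p, theta_c):.3f} GeV")
```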
A DIRC was first proposed by Blair Ratcliff as a tool for particle identification at a B-factory, and the design was first used by the BaBar collaboration at SLAC. Since its successful operation in the BaBar experiment, next-generation DIRC-type detectors have been designed for several new particle physics experiments, including Belle II, PANDA, and GlueX. The DIRC differs from earlier RICH and CRID Cherenkov light detectors in that the quartz bars used as radiators also transmit the light.
See also
BaBar DIRC homepage
Particle detectors | Detection of internally reflected Cherenkov light | [
"Technology",
"Engineering"
] | 344 | [
"Particle detectors",
"Measuring instruments"
] |
6,334,337 | https://en.wikipedia.org/wiki/B-factory | In particle physics, a B-factory, or sometimes a beauty factory, is a particle collider experiment designed to produce and detect a large number of B mesons so that their properties and behavior can be measured with small statistical uncertainty. Tau leptons and D mesons are also copiously produced at B-factories.
History and development
A sort of "prototype" or "precursor" B-factory was the HERA-B experiment at DESY that was planned to study B-meson physics in the 1990–2000s, before the actual B-factories were constructed/operational. However, usually HERA-B is not considered a B-factory.
Two B-factories were designed and built in the 1990s, and they operated from late 1999 onward: the Belle experiment at the KEKB collider in Tsukuba, Japan, and the BaBar experiment at the PEP-II collider at SLAC in California, United States. They were both electron-positron colliders with the center of mass energy tuned to the ϒ(4S) resonance peak, which is just above the threshold for decay into two B mesons (both experiments took smaller data samples at different center of mass energies). BaBar prematurely ceased data collection in 2008 due to budget cuts, but Belle ran until 2010, when it stopped data collection both because it had reached its intended integrated luminosity and because construction was to begin on upgrades to the experiment (see below).
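As a rough numerical illustration of that tuning (using nominal beam energies and approximate particle masses; actual running conditions varied), the center-of-mass energy of both machines sits at the ϒ(4S) mass, only about 20 MeV above the BB̄ threshold:

```python
# Minimal sketch: center-of-mass energy of an asymmetric e+ e- collider
# versus the Upsilon(4S) mass and the B-meson pair threshold.
# Beam energies are the nominal PEP-II / KEKB values; masses are approximate.
import math

def sqrt_s(e_minus_gev, e_plus_gev):
    # Head-on collision with beam masses negligible: s = 4 * E1 * E2
    return math.sqrt(4.0 * e_minus_gev * e_plus_gev)

m_upsilon_4s = 10.579      # GeV, approximate
m_B = 5.279                # GeV, approximate

print("PEP-II sqrt(s) =", round(sqrt_s(9.0, 3.1), 2), "GeV")   # ~10.56
print("KEKB   sqrt(s) =", round(sqrt_s(8.0, 3.5), 2), "GeV")   # ~10.58
print("2 * m_B        =", round(2 * m_B, 2), "GeV")            # ~10.56
print("Upsilon(4S)    =", m_upsilon_4s, "GeV")
```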
Current experiments
Three "next generation" B-factories were to be built in the 2010s and 2020s: SuperB near Rome in Italy; Belle II, an upgrade to Belle, and SuperPEP-II, an upgrade to the PEP-II accelerator. SuperB was canceled, and the proposal for SuperPEP-II was never acted upon. However, Belle II successfully started taking data in 2018 and is currently the only next-generation B-factory in operation.
In addition to Belle II there is the LHCb experiment at the LHC (CERN), which started operations in 2010 and primarily studies the physics of bottom-quark-containing hadrons, and thus could be understood to be a B-factory of this "next generation." But LHCb is not usually referred to as a B-factory, as the experiment and (perhaps more importantly) the corresponding collider (that is, the LHC) are not used solely for the study of b-quark particles but have other purposes besides b-quark physics.
See also
B–B̄ oscillations
b-tagging
HERA-B
KEKB
SuperKEKB
Belle experiment
Belle II
Stanford Linear Accelerator
BaBar experiment
Neutrino Factory
Higgs factory
References
External links
BaBar homepage
Belle homepage
Belle II homepage
Experimental particle physics
Particle experiments
B physics | B-factory | [
"Physics"
] | 576 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
8,310,787 | https://en.wikipedia.org/wiki/Survivin | Survivin, also called baculoviral inhibitor of apoptosis repeat-containing 5 or BIRC5, is a protein that, in humans, is encoded by the BIRC5 gene.
Survivin is a member of the inhibitor of apoptosis (IAP) family. The survivin protein functions to inhibit caspase activation, thereby leading to negative regulation of apoptosis or programmed cell death. This has been shown by disruption of survivin induction pathways leading to increase in apoptosis and decrease in tumour growth. The survivin protein is expressed highly in most human tumours and fetal tissue, but is completely absent in terminally differentiated cells. These data suggest survivin might provide a new target for cancer therapy that would discriminate between transformed and normal cells. Survivin expression is also highly regulated by the cell cycle and is only expressed in the G2-M phase. It is known that Survivin localizes to the mitotic spindle by interaction with tubulin during mitosis and may play a contributing role in regulating mitosis. The molecular mechanisms of survivin regulation are still not well understood, but regulation of survivin seems to be linked to the p53 protein. It also is a direct target gene of the Wnt pathway and is upregulated by beta-catenin.
IAP family of anti-apoptotic proteins
Survivin is a member of the IAP family of antiapoptotic proteins. It is shown to be conserved in function across evolution as homologues of the protein are found both in vertebrates and invertebrates. The first members of the IAPs identified were the baculovirus IAPs, Cp-IAP and Op-IAP, which bind to and inhibit caspases as a mechanism that contributes to the virus's efficient infection and replication cycle in the host. Later, five more human IAPs that included XIAP, c-IAP1, c-IAP2, NAIP, and survivin were discovered. Survivin, like the others, was discovered by its structural homology to the IAP family of proteins in human B-cell lymphoma. The human IAPs XIAP, c-IAP1, and c-IAP2 have been shown to bind to caspase-3 and -7, which are the effector caspases in the signaling pathway of apoptosis. It is not known with absolute certainty, though, how the IAPs inhibit apoptosis mechanistically at the molecular level.
A common feature present in all IAPs is the presence of one to three copies of a BIR (baculovirus IAP repeat, a ~70 amino acid motif). It was shown by Tamm et al. that knocking out BIR2 from XIAP was enough to cause a loss of function in terms of XIAP's ability to inhibit caspases. This implies that the anti-apoptotic function of these IAPs resides within the BIR motifs. Survivin's one BIR domain shows a similar sequence compared to that of XIAP's BIR domains.
Isoforms
The single survivin gene can give rise to four different alternatively spliced transcripts:
Survivin, which has a three-intron–four-exon structure in both the mouse and human.
Survivin-2B, which has an insertion of an alternative exon 2.
Survivin-Delta-Ex-3, which has exon 3 removed. The removal of exon 3 results in a frame shift that generates a unique carboxyl terminus with a new function. This new function may involve a nuclear localization signal. Moreover, a mitochondrial localization signal is also generated.
Survivin-3B, which has an insertion of an alternative exon 3.
Structure
A structural feature common to all IAP family proteins is that they all contain at least one baculoviral IAP repeat (BIR) domain characterized by a conserved zinc-coordinating Cys/His motif at the N-terminal half of the protein.
Survivin is distinguished from other IAP family members in that it has only one BIR domain. The mouse and human BIR domains of survivin are structurally very similar except for two differences that may affect functional variability. The human survivin also contains an elongated C-terminal helix comprising 42 amino acids. Survivin is 16.5 kDa in size and is the smallest member of the IAP family.
X-ray crystallography has shown two molecules of human survivin coming together to form a bowtie-shape dimer through a hydrophobic interface. This interface includes N-terminal residues 6-10 just before the BIR domain region and the 10 residue region connecting the BIR domain to the C-terminal helix. The structural integrity of the determined crystal structure of survivin is quite reliable, as physiological conditions were used to obtain the images.
Function
Apoptosis
Apoptosis, the process of programmed cell death, involves complex signaling pathways and cascades of molecular events. This process is needed for proper development during embryonic and fetal growth where there is destruction and reconstruction of cellular structures. In adult organisms, apoptosis is needed to maintain differentiated tissue by striking the balance between proliferation and cell death. It is known that intracellular proteases called caspases degrade the cellular contents of the cell by proteolysis upon activation of the death pathway.
Mammalian cells have two main pathways that lead to apoptosis.
1. Extrinsic pathway: Initiated by extrinsic ligands binding to death receptors on the surface of the cell. An example of this is the binding of tumour necrosis factor-alpha (TNF-alpha) to the TNF-alpha receptor. An example of a death receptor in the TNF-receptor family is Fas (CD95), which recruits initiator caspases like caspase-8 upon binding its ligand at the cell surface. The activation of the initiator caspases then initiates a downstream cascade of events that results in the induction of effector caspases that function in apoptosis.
2. Intrinsic pathway: This pathway is initiated by intracellular or environmental stimuli. It is focused on detecting the improper functioning of the mitochondria in the cell and, as a result, activates signaling pathways to commit suicide. The membrane permeability of the mitochondria increases and particular proteins are released into the cytoplasm that facilitates the activation of initiator caspases. The particular protein released from the mitochondria is cytochrome c. Cytochrome c then binds to Apaf-1 in the cytosol and results in the activation of initiator caspase-9. The activation of the initiator caspases then initiates a downstream cascade of events that results in the induction of effector caspases that function in apoptosis.
One family of proteins called IAPs plays a role in regulating cell death by inhibiting the process. IAPs, like survivin, inhibit apoptosis by physically binding to and inhibiting proper caspase function. The function of IAPs is evolutionarily conserved, as Drosophila homologues of IAPs have been shown to be essential for cell survival.
IAPs have been implicated in studies to have a regulatory effect on cell division. Yeast cells with knock-outs of certain IAP genes did not show problems associated with cell death, but showed defects in mitosis characterized by improper chromosome segregation or failed cytokinesis.
Deletion of particular IAPs does not seem to have a profound effect on the cell-death pathway as there is a redundancy of function by the many IAPs that exist in a cell. They have been implicated, however, to play a role in maintaining an anti-apoptotic environment intracellularly. Changing the expression of particular IAPs has shown an increase in spontaneous cell death induction or increased sensitivity to death stimuli.
Mechanism of action
Inhibition of Bax and Fas-induced apoptosis
Tamm et al. have shown that survivin inhibits both Bax- and Fas-induced apoptotic pathways. The experiment involved transfecting HEK 293 cells with a Bax-encoding plasmid, which resulted in an increase in apoptosis (~7 fold) as measured by DAPI staining. They then cotransfected the 293 cells with the Bax-encoding plasmid and a survivin-encoding plasmid. They observed that cells transfected along with the survivin showed a significant decrease in apoptosis (~3 fold). A similar result also showed for cells transfected with the Fas-overexpressing plasmid. Immunoblots were performed and confirmed that survivin does not inhibit apoptosis by preventing Bax or Fas protein from being made into fully functional proteins. Therefore, survivin should be acting somewhere downstream of the Bax or Fas signaling pathway to inhibit apoptosis through these pathways.
Interaction with caspase-3 and -7
In this part of the experiment, Tamm et al. transfected 293 cells with survivin and lysed them to obtain cell lysate. The lysates were incubated with different caspase forms and survivin was immunoprecipitated with anti-survivin antibody. The idea behind this is that, if survivin binds physically with the caspase it is incubated with, the caspase will be co-precipitated along with the survivin while everything else in the lysate is washed away. The immunoprecipitates were then run on SDS-PAGE and then immunoblotted for detection of the desired caspase. If the caspase of interest was detected, it meant that it was bound to survivin in the immunoprecipitation step, indicating that survivin and the particular caspase had bound beforehand. Active caspase-3 and -7 coimmunoprecipitated with survivin. The inactive proforms of caspase-3 and -7 did not bind survivin. Survivin also does not bind to active caspase-8. Caspase-3 and -7 are effector proteases whereas caspase-8 is an initiator caspase that sits more upstream in the apoptotic pathway. These results demonstrate survivin's capability to bind with particular caspases in vitro, but may not necessarily translate over to actual physiological conditions. Later, a 2001 study confirmed that human survivin tightly binds caspase-3 and -7 when expressed in E. coli.
Further evidence to support the idea that survivin blocks apoptosis by directly inhibiting caspases was given by Tamm et al. 293 cells were transfected with survivin together with a plasmid encoding either overexpressed caspase-3 or -7. They showed that survivin inhibited processing of these two caspases into their active forms. While survivin has been shown, as mentioned above, to bind to only the active forms of these caspases, it is likely here that survivin inhibits the active forms of the caspases from cleaving and activating more of their own proforms. Thus, survivin acts possibly by preventing such a cascade of cleavage and activation amplification from happening, resulting in decreased apoptosis.
In a similar manner, looking at the mitochondrial pathway of apoptosis, cytochrome c was transiently expressed in 293 cells to look at the inhibitory effects survivin had on this pathway. Although the details are not shown here, survivin was shown also to inhibit cytochrome c- and caspase-8-induced activation of caspases.
Regulation of cytokinesis
While the mechanism by which survivin may regulate cell mitosis and cytokinesis is not known, the observations made on its localization during mitosis suggests strongly that it is involved in some way in the cytokinetic process.
Proliferating Daoy cells were placed on a glass coverslip, fixed and stained with fluorescent antibodies for survivin and alpha-tubulin. Immunofluorescence using confocal microscopy was used to look at the localization of survivin and tubulin during the cell cycle to look for any patterns of survivin expression. Survivin was absent in interphase, but present in the G2-M phase.
During the different stages of mitosis, one could see that survivin follows a certain localization pattern. At prophase and metaphase, survivin is mainly nuclear in location. During prophase, as the chromatin condenses so that it is visible under the microscope, survivin starts to move to the centromeres. At prometaphase, when the nuclear membrane dissociates and spindle microtubules cross over the nuclear region, survivin stays at the centromeres. At metaphase, when the chromosomes align at the middle plate and are pulled with high tension to either pole by the kinetochore attachments, survivin then associates with the kinetochores. At anaphase, as separation of the chromatids happens, the kinetochore microtubules shorten as the chromosomes move toward the spindle poles and survivin also moves along to the midplate. Survivin thus accumulates at the midplate at telophase. Finally, survivin localizes to the midbody at the cleavage furrow.
Interaction and localization to the mitochondria
It has been shown that survivin can heterodimerize individually with the two splice variants survivin-2B and survivin-deltaEx3. Evidence for the heterodimerization of the splice variants with survivin came from co-immunoprecipitation experiments after cotransfection of the respective variants together with survivin. To determine the localization of exogenously expressed survivin-2B and survivin-deltaEx3, fusion constructs of the proteins were made with GFP and HcRed, respectively, and Daoy cells were transfected with the plasmid constructs. Survivin was also tagged with a fluorescent protein. The fusion of the survivin variants with the fluorescent molecules allows for simple detection of cellular location by fluorescence microscopy. Survivin-2B by itself localized to both nuclear and cytoplasmic compartments, whereas survivin-deltaEx3 localized only to the nucleus. The localization of the three variants (survivin, survivin-2B, and survivin-deltaEx3) differs, however, when they are cotransfected together rather than individually.
To see which subcellular compartments contained the survivin splice-variant complexes, fluorescent antibody markers for different organelles in the cell were employed. The assumption is that, under fluorescence microscopy, if a particular survivin complex is located in a particular cell compartment, one will observe an overlap between the fluorescence given off by the tagged survivin complex and that of the tagged compartment. Different fluorescence colours are used to distinguish the compartment marker from the survivin complex.
Endoplasmic reticulum and lysosomes: no colocalization
Mitochondria and Golgi: both survivin/survivin-2B and survivin/survivin-deltaEx3 colocalize
To verify these observations, the authors fractionated the subcellular compartments and performed western blot analysis to confirm that the survivin complexes did indeed localize to these compartments.
Role in cancer
Expression in different carcinomas
Survivin is known to be expressed during fetal development and across most tumour cell types, but is rarely present in normal, non-malignant adult cells. Tamm et al. showed that survivin was expressed in all 60 different human tumour lines used in the National Cancer Institute's cancer drug-screening program, with the highest levels of expression in breast and lung cancer lines and the lowest levels in renal cancers. Knowing the relative expression levels of survivin in different tumour types may prove helpful as survivin-related therapy may be administered depending on the expression level and reliance of the tumour type on survivin for resistance to apoptosis.
As an oncogene
Survivin can be regarded as an oncogene as its aberrant overexpression in most cancer cells contributes to their resistance to apoptotic stimuli and chemotherapeutic therapies, thus contributing to their ongoing survival.
Genomic instability
Most human cancers have been found to have gains and losses of chromosomes that may be due to chromosomal instability (CIN). One of the things that cause CIN is the inactivation of genes that control the proper segregation of the sister chromatids during mitosis. In gaining a better understanding of survivin's function in mitotic regulation, scientists have looked into the area of genomic instability. It is known that survivin associates with microtubules of the mitotic spindle at the start of mitosis.
It has been shown in the literature that knocking out survivin in cancer cells disrupts microtubule formation and results in polyploidy as well as massive apoptosis. It has also been shown that survivin-depleted cells exit mitosis without achieving proper chromosome alignment and then reform single tetraploid nuclei. Further evidence also suggests that survivin is needed for sustaining mitotic arrest when problems in mitosis are encountered. The evidence mentioned above indicates that survivin plays an important regulatory role both in the progression of mitosis and in sustaining mitotic arrest. This seems paradoxical, as survivin is known to be highly upregulated in most cancer cells (which usually show chromosomal instability), and yet its function promotes proper regulation of mitosis.
Regulation by p53
p53 inhibits survivin expression at the transcriptional level
Wild-type p53 has been shown to repress survivin expression at the mRNA level. Using an adenovirus vector for wild-type p53, the human ovarian cancer cell line 2774qw1 (which expresses mutant p53) was transfected. mRNA levels of survivin were analyzed by real-time quantitative PCR and showed time-dependent downregulation of survivin mRNA when the cells were infected with wild-type p53. A 3.6-fold decrease in survivin mRNA level was observed 16 hours after the start of infection, and a 6.7-fold decrease 24 hours after infection. Western blot results using an antibody specific for p53 confirmed that p53 from the adenoviral vector was indeed being expressed in the cells; p53 expression, consistent with its role in survivin repression, began 6 hours into infection and peaked at 16–24 hours. To further confirm that endogenous wild-type p53 really causes the repression of survivin gene expression, the authors treated A549 (a human lung cancer cell line with wild-type p53) and T47D (a human breast cancer cell line with mutant p53) cells with the DNA-damaging agent adriamycin to trigger the physiological p53 apoptotic response in these cancer cells, and compared the measured survivin levels with those of the same cells without DNA-damage induction. The A549 line, which intrinsically has functioning wild-type p53, showed a significant reduction in survivin levels compared to non-induced cells. The same effect was not seen in T47D cells, which carry mutant, inactive p53.
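Fold changes of this kind are commonly derived from real-time PCR threshold-cycle (Ct) values using the 2^(−ΔΔCt) (Livak) method. The sketch below is a generic illustration of that arithmetic with made-up Ct values and a hypothetical reference gene; it is not the authors' actual analysis or data.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """2^-ddCt relative expression of a target gene (Livak method)."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target Ct rises by ~1.85 cycles after treatment
# while the reference gene stays flat, i.e. roughly a 3.6-fold decrease.
rel = relative_expression(ct_target_treated=26.85, ct_ref_treated=18.0,
                          ct_target_control=25.0,  ct_ref_control=18.0)
print(f"relative expression = {rel:.2f}  (~{1/rel:.1f}-fold decrease)")
```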
p53's normal function is to regulate genes that control apoptosis. As survivin is a known inhibitor of apoptosis, p53 repression of survivin may be one mechanism by which cells can undergo apoptosis upon induction by apoptotic stimuli or signals. When survivin was over-expressed in the cell lines mentioned in the previous paragraph, the apoptotic response to the DNA-damaging agent adriamycin decreased in a dose-dependent manner. This suggests that down-regulation of survivin by p53 is important for the p53-mediated apoptotic pathway to successfully result in apoptosis. It is known that a defining characteristic of most tumors is the over-expression of survivin and the loss of wild-type p53. The evidence put forth by Mirza et al. shows that there exists a link between survivin and p53 that can possibly explain a critical event contributing to cancer progression.
p53 suppression of survivin expression
In order to see whether p53 re-expression in cancer cells (that have lost p53 expression) has a suppressive effect on the promoter of the survivin gene, a luciferase reporter construct was made. The isolated survivin promoter was placed upstream of the luciferase reporter gene. In a luciferase reporter assay, if the promoter is active, the luciferase gene is transcribed and translated into a product that gives off light that can be measured quantitatively and thus represents the activity of the promoter. This construct was transfected into cancer cells that had either wild-type or mutant p53. High luciferase activity was measured in the cells with mutant p53, and significantly lower luciferase levels were measured in cells with wild-type p53.
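Promoter activity in such reporter assays is typically expressed as a normalized, relative value, for example by dividing the survivin-promoter luciferase signal by a co-transfected control reporter and comparing conditions. The sketch below uses invented readings purely to illustrate the arithmetic; it does not reproduce the study's data or exact protocol.

```python
# Hypothetical dual-reporter readings: firefly luciferase driven by the survivin
# promoter, normalized to a co-transfected control reporter (e.g. Renilla).
samples = {
    "mutant p53":    {"firefly": 95000.0, "control": 4800.0},
    "wild-type p53": {"firefly": 12000.0, "control": 5100.0},
}

normalized = {name: s["firefly"] / s["control"] for name, s in samples.items()}
baseline = normalized["mutant p53"]
for name, value in normalized.items():
    print(f"{name:14s}  relative promoter activity = {value / baseline:.2f}")
```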
Transfection of different cell types with wild-type p53 was associated with strong repression of the survivin promoter, whereas transfection with mutant p53 was not. Further luciferase constructs were prepared with varying degrees of deletion from the 5' end of the survivin promoter region. Beyond a certain point, deletion caused the reporter levels to become indifferent to the presence of the p53 over-expression plasmid, indicating that a specific region proximal to the transcription start site is needed for p53 suppression of survivin. Although two p53 binding sites have been found on the survivin gene promoter, analysis using deletions and mutations has shown that these sites are not essential for transcriptional inactivation.
Instead, it is observed that modification of the chromatin inside of the promoter region may be responsible for the transcriptional repression of the survivin gene. This is explained below in the epigenetic regulation section.
Cell cycle regulation
Survivin is clearly regulated by the cell cycle, as its expression is found to be dominant only in the G2/M phase. This regulation exists at the transcriptional level, as there is evidence for cell-cycle-dependent element/cell-cycle gene homology region (CDE/CHR) boxes located in the survivin promoter region. Further evidence supporting this mechanism of regulation includes the finding that survivin is polyubiquitinated and degraded by proteasomes during interphase of the cell cycle. Moreover, survivin has been shown to localize to components of the mitotic spindle during metaphase and anaphase of mitosis, and physical association between polymerized tubulin and survivin has been shown in vitro as well. It has also been shown that post-translational modification of survivin, involving the phosphorylation of Thr34, leads to increased protein stability in the G2/M phase of the cell cycle.
It is known from Mirza et al. that repression of survivin by p53 is not a result of cell-cycle regulation. The same experiment by Mirza et al. determining p53 suppression of survivin at the transcriptional level was repeated, but this time with cells arrested in different stages of the cell cycle. It was shown that, although p53 arrests cells to different extents in different phases, the measured survivin mRNA and protein levels were the same across all samples transfected with wild-type p53. This shows that p53 acts in a cell-cycle-independent manner to inhibit survivin expression.
Epigenetic and genetic regulation
As observed throughout the literature, survivin is over-expressed across many tumour types. The mechanism that causes this abnormal over-expression of survivin is not known; however, p53 is downregulated in almost all cancers, so it is tempting to suggest that survivin over-expression is due to p53 inactivity. Wagner et al. investigated the possible molecular mechanisms involved in the over-expression of survivin in acute myeloid leukemia (AML). In their experiments, they performed both an epigenetic and a genetic analysis of the survivin gene promoter region in AML patients and compared the observations to peripheral blood mononuclear cells (PBMCs), which have been shown to express no survivin. Assuming that the molecular mechanism of survivin re-expression in cancerous cells operates at the transcriptional level, the authors examined particular parts of the promoter region of survivin to see what happens in cancer cells, but not in normal cells, that causes such a high level of survivin to be expressed. With regard to an epigenetic mechanism of survivin gene regulation, the authors measured the methylation status of the survivin promoter, since it is accepted that methylation plays an important role in carcinogenesis by silencing certain genes or, conversely, activating them. The authors used methylation-specific polymerase chain reaction with bisulfite sequencing to measure the promoter methylation status in AML and PBMCs and found unmethylated survivin promoters in both groups. This result shows that DNA methylation status is not an important regulator of survivin re-expression during leukemogenesis. However, De Carvalho et al. performed a DNA methylation screen and identified that DNA methylation of IRAK3 plays a key role in survivin up-regulation in different types of cancer, suggesting that epigenetic mechanisms play an indirect role in the abnormal over-expression of survivin. With regard to genetic analysis of the survivin promoter region, the isolated DNA of AML cells and PBMCs was treated with bisulfite, and the survivin promoter region was amplified by PCR and sequenced to look for any genetic changes in the DNA sequence between the two groups. Three single-nucleotide polymorphisms (SNPs) were identified, and all were present both in AML patients and in healthy donors. This result suggests that the occurrence of these SNPs in the promoter region of the survivin gene is also of no importance to survivin expression. However, it has not been ruled out that other epigenetic mechanisms may be responsible for the high level of survivin expression observed in cancer cells and not in normal cells; for example, the acetylation profile of the survivin promoter region could also be examined. Different cancer and tissue types may have slight or significant differences in the way survivin expression is regulated in the cell, and, thus, the methylation status or genetic differences in the survivin promoter may differ between tissues. Further experiments assessing the epigenetic and genetic profiles of different tumour types are therefore needed.
As a drug target
Expression in cancer as a tool for cancer-directed therapy
Survivin is known to be highly expressed in most tumour cell types and absent in normal cells, making it a good target for cancer therapy. Exploiting survivin's over-active promoter in most cancer cell types allows therapeutics to be delivered to cancer cells while sparing normal cells.
Small interfering RNAs (siRNAs) are short synthetic RNA molecules complementary to the mRNA of a gene of interest, which silence the expression of that gene through complementary binding. An siRNA, such as LY2181308, bound to the respective mRNA disrupts translation of that particular gene and thus results in the absence of that protein in the cell. The use of siRNAs therefore has great potential as a human therapeutic, as it can in principle target and silence the expression of any chosen protein. A problem arises when siRNA expression in a cell cannot be controlled, allowing constitutive expression to cause toxic side-effects. For practical treatment of cancer, it is necessary either to deliver the siRNAs specifically into cancer cells or to control the siRNA expression. Previous approaches to siRNA therapy used siRNA sequences cloned into vectors under the control of constitutively active promoters. This is problematic, as such a design is non-specific to cancer cells and damages normal cells as well. Knowing that survivin is over-expressed specifically in cancer cells and absent in normal cells, one can infer that the survivin promoter is active only in cancer cells. Exploiting this difference between cancer cells and normal cells allows therapy to be directed only at the harmful cells in a patient. In an experiment demonstrating this idea, Trang et al. created a cancer-specific vector expressing an siRNA against green fluorescent protein (GFP) under the control of the human survivin promoter. MCF7 breast cancer cells were cotransfected with this vector and a GFP-expressing vector. The major finding was that MCF7 cells transfected with the siRNA vector for GFP under the survivin promoter had a significant reduction in GFP expression compared with cells transfected with the siRNA vector under a cancer-non-specific promoter. Moreover, normal non-cancerous cells transfected in the same way showed no significant reduction in GFP expression, implying that in normal cells the survivin promoter is not active and the siRNA is therefore not expressed.
Antisense oligonucleotides targeting survivin mRNA
Survivin is over-expressed in most cancers, which may contribute to the cancer cells' resistance to apoptotic stimuli from the environment. Antisense survivin therapy aims to render cancer cells susceptible to apoptosis by eliminating survivin expression in the cancer cells.
Olie et al. developed different 20-mer phosphorothioate antisense oligonucleotides that target different regions of the survivin mRNA. The antisense oligonucleotides bind survivin mRNA and, depending on the region bound, may prevent the survivin mRNA from being translated into a functional protein. Real-time PCR was used to assess the mRNA levels present in the lung adenocarcinoma cell line A549, which overexpresses survivin. The most effective antisense oligonucleotide was identified as the one that down-regulated survivin mRNA levels and resulted in apoptosis of the cells. Survivin's role in cancer development, in the context of a signaling pathway, is its ability to inhibit the activation of downstream caspase-3 and -7 by apoptosis-inducing stimuli. The overexpression of survivin in tumours may serve to increase the tumour's resistance to apoptosis and thus contribute to cell immortality even in the presence of death stimuli. In this experiment, oligonucleotide 4003, which targets nucleotides 232–251 of survivin mRNA, was found to be the most effective at down-regulating survivin mRNA levels in the A549 tumour line. The 4003 oligonucleotide was introduced into the tumour cells by transfection, and further experiments were then conducted on it. One of these experiments determined the dose-dependent effect of 4003 on the down-regulation of survivin mRNA levels: a concentration of 400 nM resulted in a maximum down-regulation of 70% of the initial survivin mRNA present. Another experiment assessed any biological or cytotoxic effect of 4003-mediated down-regulation of survivin mRNA on A549 cells using the MTT assay. The number of A549 cells transfected with 4003 decreased significantly with increasing concentration of 4003 compared to cells transfected either with a mismatch form of 4003 or with a lipofectin control. Several physical observations confirmed the induction of apoptosis by 4003; for example, lysates of 4003-treated cells showed increased caspase-3-like protease activity, nuclei were observed to be condensed, and chromatin was fragmented.
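An antisense oligonucleotide of this kind is, in sequence terms, the reverse complement (written as DNA) of a chosen window of the target mRNA. The sketch below shows that derivation on a made-up RNA string; both the sequence and the coordinates are placeholders, not the survivin transcript or the actual 4003 oligonucleotide.

```python
COMPLEMENT_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antisense_oligo(mrna: str, start: int, length: int = 20) -> str:
    """Reverse complement (as DNA) of mrna[start..start+length-1], 1-based start."""
    window = mrna.upper()[start - 1:start - 1 + length]
    return "".join(COMPLEMENT_DNA[base] for base in reversed(window))

# Placeholder mRNA fragment -- not the survivin sequence.
example_mrna = "AUGGCAGUCCGAUUACGGCUAAGCCGUAUCGGACUUGACCGAUAACGGCUAAGC"
print(antisense_oligo(example_mrna, start=5))
```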
Cancer immunotherapy
Survivin has been a target of attention in recent years for cancer immunotherapy, as it is an antigen that is expressed mostly in cancer cells and absent in normal cells. This is because survivin is deemed to be a crucial player in tumour survival. There has been much evidence accumulated over the years that shows survivin as a strong T-cell-activating antigen, and clinical trials have already been initiated to prove its usefulness in the clinic.
Activation of the adaptive immune system
A. Cellular T cell response
The first evidence of survivin-specific CTL recognition and killing came from an assay in which cytotoxic T cells (CTLs) induced lysis of B cells transfected to present survivin peptides on their surface. The naive CD8+ T cells were primed with dendritic cells and could therefore recognize the specific survivin peptides presented on the surface major histocompatibility complex class I (MHC I) molecules of the B cells.
B. Humoral antibody response
Taking blood samples from cancer patients, scientists have found antibodies that are specific for survivin. These antibodies were absent in the blood samples of healthy normal patients. Therefore, this shows that survivin is able to elicit a full humoral immune response. This may prove useful, as one could measure the level of survivin-specific antibodies in the patient's blood as a monitor of tumour progression. In acquiring the humoral response to tumour antigens such as survivin, CD4+ T cells are activated to induce B cells to produce antibodies directed against the particular antigens.
The isolation of the antibodies specific for survivin peptides is useful, as one can look at the structure and sequence of the epitope binding groove of the antibody and, therefore, deduce possible epitopes that may fit in that particular antibody groove. Therefore, one can determine the particular peptide portion of the survivin protein that is bound most efficiently and most commonly by humoral antibodies generated against survivin. This will lead to the production of more specific survivin vaccines that contain a specific portion of the survivin protein that is known to elicit a good immune response, generate immune memory, and allow for protection from tumour development.
Over-expression in tumours and metastatic tissues
Xiang et al. found a new approach to inhibiting tumour growth and metastasis: simultaneously attacking both the tumour and its vasculature with a cytotoxic T cell (CTL) response against the survivin protein, which later results in the activation of apoptosis in tumour cells.
The idea and general principle behind this technique are described below. Mice were immunized with the oral vaccine and then subjected to tumour challenges by injecting into the chest a certain number of tumour cells together with a Matrigel pre-formed extracellular matrix to hold the tumour cells together. The mice were sacrificed and the endothelial tissue was stained with a fluorescent dye to aid in the quantification of tumour neovascularisation using a Matrigel assay. A significant difference was found between the control and test groups: mice given the vaccine showed less angiogenesis after the tumour challenge than control mice that had not received the vaccine prior to challenge. In vitro assays and other tests were also performed to validate that an actual immune response underlay what was observed in the mice. For example, the spleens of the challenged mice were isolated and measured for the presence of cytokines and specifically activated immune cell populations that would be indicative of a specific immune response occurring upon vaccination. CTLs specific for the survivin protein, isolated after vaccination of the mice, were used in cytotoxicity assays in which mouse tumour cells expressing survivin were shown to be killed upon incubation with the specific CTLs.
By using an oral DNA vaccine carried in an attenuated non-virulent form of Salmonella typhimurium, co-encoding the secretory chemokine CCL21 and the survivin protein, in C57BL/6J mice, Xiang et al. were able to elicit an immune response carried out by dendritic cells (DCs) and CTLs to eliminate and suppress pulmonary metastases of non-small cell lung carcinoma. The activation of the immune response most likely takes place in the secondary lymphoid tissue called Peyer's patches in the small intestine, where DCs take up the survivin protein by phagocytosis and present it on their surface receptors to naive CD8+ T cells (unactivated CTLs) to achieve a specific immune response targeting survivin exclusively. Activated CTLs specific for a particular antigen kill their target cells by first recognizing parts of the survivin protein presented on major histocompatibility complex class I (MHC I) molecules on the surface of tumour cells and vasculature and then releasing granules that induce the tumour cells to undergo apoptosis. The DNA vaccine contained the CCL21 secretory chemokine to enhance the likelihood of eliciting the immune response by better mediating the physical interaction between the antigen-presenting DCs and the naive CD8+ T cells, resulting in a greater likelihood of immune activation.
Resveratrol-mediated sensitization
It has been shown by Fulda et al. that the naturally occurring compound resveratrol (a polyphenol found in grapes and red wine) can be used as a sensitizer for anticancer drug-induced apoptosis through its action of causing cell cycle arrest. This cell cycle arrest causes a dramatic decline in survivin levels in the cells, as it is known from the literature that survivin expression is strongly linked to cell cycle phase. The decrease in survivin, which is a contributing factor to resistance against chemotherapy and apoptosis-inducing therapies, thus renders the cancer cells more susceptible to such treatments. Fulda et al. demonstrated the benefits of resveratrol through a series of experiments. First, they tested the intrinsic cytotoxic effects of resveratrol and found that it induced moderate apoptosis levels only in SHEP neuroblastoma cells. Next, they tested resveratrol in combination with several different known anticancer agents and found a consistent increase in the level of apoptosis induced by the drugs when resveratrol was also present. Moreover, they varied the order in which the drugs or resveratrol were introduced to the cancer cells to determine whether the sequence of treatment had any important effect; the highest levels of apoptosis induction were observed when resveratrol was added prior to anticancer drug treatment. Finally, the authors tested for any differential sensitivity to apoptosis linked to the phase of the cell cycle the cells were in. Analysis by flow cytometry revealed an accumulation of cells in S phase upon treatment with resveratrol. The cells were also halted in different phases of the cell cycle using specific compounds and then treated with the anticancer drugs; cells halted in S phase were significantly more sensitive to the cytotoxic effects of the drugs.
To determine the involvement of survivin in resveratrol-mediated sensitization, the authors tested whether downregulation of survivin protein expression would confer a phenotype similar to that of resveratrol-treated cells. To see at which level resveratrol acts, they performed a northern blot and found that resveratrol treatment resulted in a decrease in survivin mRNA levels, implying that resveratrol's inhibitory action is at the transcriptional level. To further test whether survivin plays a key role in the sensitization of the cancer cells to cytotoxic drugs, survivin antisense oligonucleotides were used to knock down survivin mRNA and thereby prevent its translation. These antisense oligonucleotides are complementary in sequence to the mRNA encoding survivin; when introduced into cells, they bind the complementary mRNA and prevent its translation by blocking proper physical interaction with the translational machinery, effectively downregulating survivin expression in the cell. Cells treated with antisense oligonucleotides for survivin showed similar sensitization to cytotoxic drugs as cells treated with resveratrol, which supports the proposed mechanism of action of resveratrol.
Prostate cancer
It has been observed that the development of hormone resistance in prostate cancer may be due to the upregulation of antiapoptotic genes, one of which is survivin.
Zhang et al. hypothesize that, if survivin is a significant contributor to the development of hormonal-therapy resistance in prostate cancer cells, then targeting and blocking survivin should enhance prostate cancer cell susceptibility to anti-androgen therapy. (Anti-androgen therapy uses drugs to eliminate androgens from the cell and the cellular environment, since androgens are known to enhance tumour immortality in prostate cancer cells.) Zhang et al. first assessed the level of survivin expression in LNCaP cells (an androgen-dependent prostate cancer cell line that expresses intact androgen receptors) using quantitative Western analysis and found high expression of survivin. Cells exposed to dihydrotestosterone (DHT) showed increased expression of survivin only, and not of other IAP family members. This result suggests that androgens may upregulate survivin, contributing to the resistance to apoptosis observed in the tumour cells. With the addition of flutamide (an antiandrogen) to the cells, survivin levels decreased significantly. The LNCaP cells were transduced separately with different constructs of the survivin gene (mutant or wild-type), subjected to flutamide treatment, and assessed for the level of apoptosis. Flutamide-treated, survivin-mutant-transduced cells showed roughly double the apoptosis of flutamide treatment alone. On the other hand, overexpression of wild-type survivin significantly reduced the apoptosis resulting from flutamide treatment compared with flutamide treatment alone. These results therefore support the hypothesis that survivin plays a role in the anti-apoptotic nature of the LNCaP cancer cell line and that inhibiting survivin in prostate cancer cells appears to enhance the therapeutic effect of flutamide.
Interactions
Survivin has been shown to interact with:
Aurora B kinase,
CDCA8,
Caspase 3,
Caspase 7,
Diablo homolog and
INCENP.
References
Further reading
Oncology
Gene expression | Survivin | [
"Chemistry",
"Biology"
] | 9,284 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
8,314,468 | https://en.wikipedia.org/wiki/Bradbury%E2%80%93Nielsen%20shutter | A Bradbury–Nielsen shutter (or Bradbury–Nielsen gate) is a type of electrical ion gate, first proposed in an article by Norris Bradbury and Russel A. Nielsen, who used it as an electron filter. Today such shutters are used in the field of mass spectrometry, in both TOF mass spectrometers and ion mobility spectrometers, as well as in Hadamard transform mass spectrometers (a variant of TOF-MS). The Bradbury–Nielsen shutter is ideal for injecting short pulses of ions and can be used to improve the mass resolution of TOF instruments by reducing the initial pulse size as compared to other methods of ion injection.
Theory of operation
The concept behind the Bradbury–Nielsen shutter is to apply a high frequency voltage in a 180° out-of-phase manner to alternate wires in a grid which is orthogonal to the path of the ion beam. This results in charged particles only passing directly through the shutter at certain times in the voltage phase (φ=nπ/2), when the potential difference between the grid wires is zero. At other times the ion beam is deflected to some angle by the potential difference between the neighboring wires. This deflection is divergent with ions that pass through alternate slits being deflected in opposite directions. The maximum deflection angle can be calculated by
tan α = k Vp / V0
where α is the deflection angle, k is a deflection constant, Vp is the wire voltage (+Vp on one wire set and -Vp on the other), and V0 is the ion acceleration voltage in eV. The deflection constant k can be calculated by
k = π / (2 ln[cot(πR/2d)])
where R is the wire radius and d is the wire spacing.
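As a rough numerical illustration of the two relations above, the following Python sketch evaluates the deflection constant k and the maximum deflection angle α for assumed wire dimensions and voltages. The parameter values (10 μm wire radius, 100 μm spacing, ±200 V gate voltage, 5 kV acceleration) are arbitrary examples, not taken from any particular instrument.

```python
import math

def deflection_constant(wire_radius, wire_spacing):
    """k = pi / (2 * ln(cot(pi*R / (2*d)))), with R the wire radius and d the spacing."""
    x = math.pi * wire_radius / (2 * wire_spacing)
    return math.pi / (2 * math.log(1 / math.tan(x)))

def deflection_angle_deg(wire_voltage, acceleration_voltage, wire_radius, wire_spacing):
    """Maximum deflection angle alpha from tan(alpha) = k * Vp / V0."""
    k = deflection_constant(wire_radius, wire_spacing)
    return math.degrees(math.atan(k * wire_voltage / acceleration_voltage))

# Illustrative (assumed) values: 10 um radius wires spaced 100 um apart,
# +/-200 V gate voltage, ions accelerated through 5 kV.
print(deflection_angle_deg(wire_voltage=200.0, acceleration_voltage=5000.0,
                           wire_radius=10e-6, wire_spacing=100e-6))
```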
Micromachined ion gates
A Bradbury–Nielsen gate micromachined from a silicon-on-insulator wafer has been reported.
References
See also
Ion mobility spectrometer
Time-of-flight
Mass spectrometry | Bradbury–Nielsen shutter | [
"Physics",
"Chemistry"
] | 431 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
8,315,449 | https://en.wikipedia.org/wiki/Stick%E2%80%93slip%20phenomenon | The stick–slip phenomenon, also known as the slip–stick phenomenon or simply stick–slip, is a type of motion exhibited by objects in contact sliding over one another. The motion of these objects is usually not perfectly smooth, but rather irregular, with brief accelerations (slips) interrupted by stops (sticks). Stick–slip motion is normally connected to friction, and may generate vibration (noise) or be associated with mechanical wear of the moving objects, and is thus often undesirable in mechanical devices. On the other hand, stick–slip motion can be useful in some situations, such as the movement of a bow across a string to create musical tones in a bowed string instrument.
Details
With stick–slip there is typically a jagged behavior of the friction force as a function of time, as illustrated in the static–kinetic friction figure. Initially there is relatively little movement and the force climbs until it reaches a critical value set by the product of the static friction coefficient and the applied load—the retarding force here follows the standard ideas of friction from Amontons' laws. Once this force is exceeded, movement starts and the retarding force drops to a much lower value determined by the kinetic friction coefficient, which is almost always smaller than the static coefficient. At times the moving object can get 'stuck', with local rises in the force before it starts to move again. There are many causes of this depending upon the size scale, from atomic-scale processes to processes involving millions of atoms.
Stick–slip can be modeled as a mass coupled by an elastic spring to a constant drive force (see the model sketch). The drive system V applies a constant force, loading spring R and increasing the pushing force against load M. This force increases until retarding force from the static friction coefficient between load and floor is exceeded. The load then starts sliding, and the friction coefficient decreases to the value corresponding to load times the dynamic friction. Since this frictional force will be lower than the static value, the load accelerates until the decompressing spring can no longer generate enough force to overcome dynamic friction, and the load stops moving. The pushing force due to the spring builds up again, and the cycle repeats.
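The spring-driven block just described can be illustrated with a minimal time-stepping sketch in Python. All numerical values (mass, friction coefficients, spring stiffness, drive speed) are arbitrary assumptions chosen only to make the stick–slip cycle visible; this is not a model of any specific system.

```python
# Minimal stick-slip sketch: a block of mass m on a surface, pulled through a
# spring whose far end advances at constant speed v_drive (all values assumed).
m, g = 1.0, 9.81            # kg, m/s^2
mu_s, mu_k = 0.6, 0.4       # static and kinetic friction coefficients
k_spring = 50.0             # N/m
v_drive, dt, steps = 0.01, 1e-3, 20000

x, v = 0.0, 0.0             # block position and velocity
for i in range(steps):
    f_spring = k_spring * (v_drive * i * dt - x)
    if v == 0.0:
        # Stuck: the block stays put until the spring exceeds static friction.
        if abs(f_spring) > mu_s * m * g:
            v = 1e-9 if f_spring > 0 else -1e-9      # nudge into sliding
    else:
        # Sliding: kinetic friction opposes the motion; re-stick if v reverses.
        f_fric = -mu_k * m * g * (1.0 if v > 0 else -1.0)
        v_new = v + (f_spring + f_fric) / m * dt
        v = 0.0 if v_new * v <= 0.0 else v_new
    x += v * dt
    if i % 2000 == 0:
        state = "stick" if v == 0.0 else "slip"
        print(f"t={i*dt:5.1f} s  spring force={f_spring:6.2f} N  {state}")
```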
Stick–slip may be caused by many different phenomena, depending on the types of surfaces in contact and also the scale; it occurs with everything from the sliding of atomic force microscope tips to large tribometers. For rough surfaces, it is known that asperities play a major role in friction. The bumping together of asperities on the surface creates momentary sticks. For dry surfaces with regular microscopic topography, the two surfaces may need to creep at high friction for certain distances (in order for bumps to move past one another), until a smoother, lower-friction contact is formed. On lubricated surfaces, the lubricating fluid may undergo transitions from a solid-like state to a liquid-like state at certain forces, causing a transition from sticking to slipping. On very smooth surfaces, stick–slip behavior may result from coupled phonons (at the interface between the substrate and the slider) that are pinned in an undulating potential well, sticking or slipping with thermal fluctuations. Stick–slip occurs on all types of materials and on enormously varying length scales. The frequency of slips depends on the force applied to the sliding load, with a higher force corresponding to a higher frequency of slip.
Examples
Stick–slip motion is ubiquitous in systems with sliding components, such as disk brakes, bearings, electric motors, wheels on roads or railways, and in mechanical joints. Stick–slip also has been observed in articular cartilage in mild loading and sliding conditions, which could result in abrasive wear of the cartilage. Many familiar sounds are caused by stick–slip motion, such as the squeal of chalk on a chalkboard, the squeak of basketball shoes on a basketball court, and the sound made by the spiny lobster.
Stick–slip motion is used to generate musical notes in bowed string instruments, the glass harp and the singing bowl.
Stick–slip can also be observed on the atomic scale using a friction force microscope. The behaviour of seismically active faults is also explained using a stick–slip model, with earthquakes being generated during the periods of rapid slip.
See also
References
External links
Simulation of stick-slip behaviour in a friction force microscope (movie)
Jianguo Wu, Ashlie Martini, "Atomic Stick-Slip," DOI: 10254/nanohub-r7771.1, 2009
Mechanical engineering
Friction | Stick–slip phenomenon | [
"Physics",
"Chemistry",
"Engineering"
] | 927 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Applied and interdisciplinary physics",
"Surface science",
"Mechanical engineering"
] |
2,636,319 | https://en.wikipedia.org/wiki/Fluxon | In physics, a fluxon is a quantum of electromagnetic flux. The term may have any of several related meanings.
Superconductivity
In the context of superconductivity, in type II superconductors fluxons (also known as Abrikosov vortices) can form when the applied field lies between the lower critical field Hc1 and the upper critical field Hc2. The fluxon is a small whisker of normal phase surrounded by superconducting phase, and supercurrents circulate around the normal core. The magnetic field through such a whisker and its neighborhood, which has a size of the order of the London penetration depth (~100 nm), is quantized because of the phase properties of the magnetic vector potential in quantum electrodynamics; see magnetic flux quantum for details.
In the context of long superconductor–insulator–superconductor Josephson tunnel junctions, a fluxon (also known as a Josephson vortex) is made of circulating supercurrents and has no normal core in the tunneling barrier. Supercurrents circulate around the mathematical center of the fluxon, which is situated within the (insulating) Josephson barrier. Again, the magnetic flux created by the circulating supercurrents is equal to a magnetic flux quantum (or less, if the superconducting electrodes of the Josephson junction are thinner than the London penetration depth).
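For a sense of the magnitudes involved, the following sketch computes the magnetic flux quantum Φ0 = h/2e and, assuming a core region of roughly the ~100 nm scale quoted above, the field required to thread a single Φ0 through it; the numbers are illustrative, not properties of a specific material or junction.

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi_0 = h / (2 * e)                  # magnetic flux quantum, ~2.07e-15 Wb
lambda_L = 100e-9                    # assumed penetration depth scale, ~100 nm
B = phi_0 / (math.pi * lambda_L**2)  # field threading one Phi_0 through that area

print(f"Phi_0 = {phi_0:.3e} Wb")
print(f"B over a ~100 nm radius core = {B:.3f} T")
```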
Magnetohydrodynamics modeling
In the context of numerical MHD modeling, a fluxon is a discretized magnetic field line, representing a finite amount of magnetic flux in a localized bundle in the model. Fluxon models are explicitly designed to preserve the topology of the magnetic field, overcoming numerical resistivity effects in Eulerian models.
References
External links
FLUX, a fluxon-based MHD simulator
Theoretical physics
Superconductivity
Josephson effect | Fluxon | [
"Physics",
"Materials_science",
"Engineering"
] | 373 | [
"Josephson effect",
"Physical quantities",
"Theoretical physics",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Theoretical physics stubs",
"Electrical resistance and conductance"
] |
2,636,884 | https://en.wikipedia.org/wiki/Source%20criticism | Source criticism (or information evaluation) is the process of evaluating an information source, i.e.: a document, a person, a speech, a fingerprint, a photo, an observation, or anything used in order to obtain knowledge. In relation to a given purpose, a given information source may be more or less valid, reliable or relevant. Broadly, "source criticism" is the interdisciplinary study of how information sources are evaluated for given tasks.
Meaning
Problems in translation: The Danish word kildekritik, like the Norwegian word kildekritikk and the Swedish word källkritik, is derived from the German Quellenkritik and is closely associated with the German historian Leopold von Ranke (1795–1886). The historian Hardtwig wrote:
His [Ranke's] first work Geschichte der romanischen und germanischen Völker von 1494–1514 (History of the Latin and Teutonic Nations from 1494 to 1514) (1824) was a great success. It already showed some of the basic characteristics of his conception of Europe, and was of historiographical importance particularly because Ranke made an exemplary critical analysis of his sources in a separate volume, Zur Kritik neuerer Geschichtsschreiber (On the Critical Methods of Recent Historians). In this work he raised the method of textual criticism used in the late eighteenth century, particularly in classical philology to the standard method of scientific historical writing. (Hardtwig, 2001, p. 12739)
Historical theorist Chris Lorenz wrote:
The larger part of the nineteenth and twentieth centuries would be dominated by the research-oriented conception of historical method of the so-called Historical School in Germany, led by historians as Leopold Ranke and Berthold Niebuhr. Their conception of history, long been regarded as the beginning of modern, 'scientific' history, harked back to the 'narrow' conception of historical method, limiting the methodical character of history to source criticism. (Lorenz, 2001)
In the early 21st century, source criticism is a growing field in, among other fields, library and information science. In this context source criticism is studied from a broader perspective than just, for example, history, classical philology, or biblical studies (but there, too, it has more recently received new attention).
Principles
The following principles are from two Scandinavian textbooks on source criticism, written by the historians Olden-Jørgensen (1998) and Thurén (1997):
Human sources may be relics (e.g. a fingerprint) or narratives (e.g. a statement or a letter). Relics are more credible sources than narratives.
A given source may be forged or corrupted; strong indications of the originality of the source increases its reliability.
The closer a source is to the event which it purports to describe, the more one can trust it to give an accurate description of what really happened.
A primary source is more reliable than a secondary source, which in turn is more reliable than a tertiary source and so on.
If a number of independent sources contain the same message, the credibility of the message is strongly increased.
The tendency of a source is its motivation for providing some kind of bias. Tendencies should be minimized or supplemented with opposite motivations.
If it can be demonstrated that the witness (or source) has no direct interest in creating bias, the credibility of the message is increased.
Two other principles are:
Knowledge of source criticism cannot substitute for subject knowledge:
"Because each source teaches you more and more about your subject, you will be able to judge with ever-increasing precision the usefulness and value of any prospective source. In other words, the more you know about the subject, the more precisely you can identify what you must still find out". (Bazerman, 1995, p. 304).
The reliability of a given source is relative to the questions put to it.
"The empirical case study showed that most people find it difficult to assess questions of cognitive authority and media credibility in a general sense, for example, by comparing the overall credibility of newspapers and the Internet. Thus these assessments tend to be situationally sensitive. Newspapers, television and the Internet were frequently used as sources of orienting information, but their credibility varied depending on the actual topic at hand" (Savolainen, 2007).
The following questions are often good ones to ask about any source according to the American Library Association (1994) and Engeldinger (1988):
How was the source located?
What type of source is it?
Who is the author and what are the qualifications of the author in regard to the topic that is discussed?
When was the information published?
In which country was it published?
What is the reputation of the publisher?
Does the source show a particular cultural or political bias?
For literary sources complementing criteria are:
Does the source contain a bibliography?
Has the material been reviewed by a group of peers, or has it been edited?
How does the article/book compare with similar articles/books?
Levels of generality
Some principles of source criticism are universal, other principles are specific for certain kinds of information sources.
There is today no consensus about the similarities and differences between source criticism in the natural science and humanities. Logical positivism claimed that all fields of knowledge were based on the same principles. Much of the criticism of logical positivism claimed that positivism is the basis of the sciences, whereas hermeneutics is the basis of the humanities. This was, for example, the position of Jürgen Habermas. A newer position, in accordance with, among others, Hans-Georg Gadamer and Thomas Kuhn, understands both science and humanities as determined by researchers' preunderstanding and paradigms. Hermeneutics is thus a universal theory. The difference is, however, that the sources of the humanities are themselves products of human interests and preunderstanding, whereas the sources of the natural sciences are not. Humanities are thus "doubly hermeneutic".
Natural scientists, however, also use human products (such as scientific papers), which are products of preunderstanding (and which can, for example, lead to academic fraud).
Contributing fields
Epistemology
Epistemological theories are the basic theories about how knowledge is obtained and are thus the most general theories about how to evaluate information sources.
Empiricism evaluates sources by considering the observations (or sensations) on which they are based. Sources without basis in experience are not seen as valid.
Rationalism gives low priority to sources based on observations. In order to be meaningful, observations must be explained by clear ideas or concepts. It is the logical structure and the well-definedness that are in focus when evaluating information sources from the rationalist point of view.
Historicism evaluates information sources on the basis of their reflection of their sociocultural context and their theoretical development.
Pragmatism evaluates sources on the basis of their value and usefulness for accomplishing certain outcomes. Pragmatism is skeptical about claimed neutral information sources.
The evaluation of knowledge or information sources cannot be more certain than is the construction of knowledge. If one accepts the principle of fallibilism then one also has to accept that source criticism can never 100% verify knowledge claims. As discussed in the next section, source criticism is intimately linked to scientific methods.
The presence of fallacies of argument in sources is another kind of philosophical criterion for evaluating sources. Fallacies are presented by Walton (1998). Among the fallacies are the ad hominem fallacy (the use of personal attack to try to undermine or refute a person's argument) and the straw man fallacy (when one arguer misrepresents another's position to make it appear less plausible than it really is, in order more easily to criticize or refute it.)
Research methodology
Research methods are methods used to produce scholarly knowledge. The methods that are relevant for producing knowledge are also relevant for evaluating knowledge. An example of a book that turns methodology upside-down and uses it to evaluate produced knowledge is Katzer; Cook & Crouch (1998).
Science studies
Studies of quality evaluation processes such as peer review, book reviews and of the normative criteria used in evaluation of scientific and scholarly research. Another field is the study of scientific misconduct.
Harris (1979) provides a case study of how a famous experiment in psychology, Little Albert, has been distorted throughout the history of psychology, starting with the author (Watson) himself, general textbook authors, behavior therapists, and a prominent learning theorist. Harris proposes possible causes for these distortions and analyzes the Albert study as an example of myth making in the history of psychology. Studies of this kind may be regarded as a special kind of reception history (how Watson's paper was received). They may also be regarded as a kind of critical history (opposed to ceremonial history of psychology, cf. Harris, 1980). Such studies are important for source criticism in revealing the bias introduced by referring to classical studies.
Textual criticism
Textual criticism (or, more broadly, text philology) is a part of philology that is devoted not just to the study of texts, but also to editing and producing "scientific editions", "scholarly editions", "standard editions", "historical editions", "reliable editions", "reliable texts", "text editions" or "critical editions", which are editions in which careful scholarship has been employed to ensure that the information contained within is as close to the author's/composer's original intentions as possible (and which allow the user to compare and judge changes in editions published under the influence of the author/composer). The relation between these kinds of works and the concept of "source criticism" is evident in Danish, where they may be termed "kildekritisk udgave" (directly translated: "source-critical edition").
In other words, it is assumed that most editions of a given work are filled with noise and errors introduced by publishers, which is why it is important to produce "scholarly editions". The work provided by text philology is an important part of source criticism in the humanities.
Psychology
The study of eyewitness testimony is an important field, used, among other purposes, to evaluate testimony in courts. The basics of eyewitness fallibility include factors such as poor viewing conditions, brief exposure, and stress. More subtle factors, such as expectations, biases, and personal stereotypes, can intervene to create erroneous reports. Loftus (1996) discusses all such factors and also shows that eyewitness memory is chronically inaccurate in surprising ways. An ingenious series of experiments reveals that memory can be radically altered by the way an eyewitness is questioned after the fact. New memories can be implanted and old ones unconsciously altered under interrogation.
Anderson (1978) and Anderson & Pichert (1977) reported an elegant experiment demonstrating how change in perspective affected people's ability to recall information that was unrecallable from another perspective.
In psychoanalysis the concept of defence mechanism is important and may be considered a contribution to the theory of source criticism because it explains psychological mechanisms, which distort the reliability of human information sources.
Library and information science (LIS)
In schools of library and information science (LIS), source criticism is taught as part of the growing field of information literacy.
Issues such as relevance, quality indicators for documents, and kinds of documents and their qualities (e.g. scholarly editions) are studied in LIS and are relevant for source criticism. Bibliometrics is often used to find the most influential journals, authors, countries and institutions. Librarians study book reviews and their function in evaluating books.
In library and information science the checklist approach has often been used. A criticism of this approach is given by Meola (2004): "Chucking the checklist".
Libraries sometimes provide advice on how their users may evaluate sources.
The Library of Congress has a "Teaching with Primary Sources" (TPS) program.
Ethics
Source criticism is also about ethical behavior and culture. It is about a free press and an open society, including the protecting information sources from being persecuted (cf., Whistleblower).
In specific domains
Photos
Photos are often manipulated during wars and for political purposes. One well-known example is Joseph Stalin's manipulation of a photograph from May 5, 1920, in which Stalin's predecessor Lenin gave a speech to Soviet troops with Leon Trotsky in attendance. Stalin later had Trotsky retouched out of this photograph (cf. King, 1997). A recent example concerning North Korean leader Kim Jong Il is reported by Healy (2008).
Internet sources
Much interest in evaluating Internet sources (such as Wikipedia) is reflected in the scholarly literature of library and information science and in other fields. Mintz (2002) is an edited volume about this issue. Examples of literature examining Internet sources include Chesney (2006), Fritch & Cromwell (2001), Leth & Thurén (2000) and Wilkinson, Bennett, & Oliver (1997).
Archaeology and history
"In history, the term historical method was first introduced in a systematic way in the sixteenth century by Jean Bodin in his treatise of source criticism, Methodus ad facilem historiarium cognitionem (1566). Characteristically, Bodin's treatise intended to establish the ways by which reliable knowledge of the past could be established by checking sources against one another and by so assessing the reliability of the information conveyed by them, relating them to the interests involved." (Lorenz, 2001, p. 6870).
As written above, modern source criticism in history is closely associated with the German historian Leopold von Ranke (1795–1886), who influenced historical methods on both sides of the Atlantic Ocean, although in rather different ways. American history developed in a more empirist and antiphilosophical way (cf., Novick, 1988).
Two of the best-known rule books from the 19th century are Bernheim (1889) and Langlois & Seignobos (1898). These books provided a seven-step procedure (here quoted from Howell & Prevenier, 2001, p. 70–71):
If the sources all agree about an event, historians can consider the event proved.
However, majority does not rule; even if most sources relate events in one way, that version will not prevail unless it passes the test of critical textual analysis.
The source whose account can be confirmed by reference to outside authorities in some of its parts can be trusted in its entirety if it is impossible similarly to confirm the entire text.
When two sources disagree on a particular point, the historian will prefer the source with most "authority"—i.e. the source created by the expert or by the eyewitness.
Eyewitnesses are, in general, to be preferred, especially in circumstances where the ordinary observer could have accurately reported what transpired and, more specifically, when they deal with facts known by most contemporaries.
If two independently created sources agree on a matter, the reliability of each is measurably enhanced.
When two sources disagree (and there is no other means of evaluation), then historians take the source which seems to accord best with common sense.
Gudmundsson (2007, p. 38) wrote: "Source criticism should not totally dominate later courses. Other important perspectives, for example, philosophy of history/view of history, should not suffer by being neglected" (Translated by BH). This quote makes a distinction between source criticism on the one hand and historical philosophy on the other hand. However, different views of history and different specific theories about the field being studied may have important consequences for how sources are selected, interpreted and used. Feminist scholars may, for example, select sources made by women and may interpret sources from a feminist perspective. Epistemology should thus be considered a part of source criticism. It is in particular related to "tendency analysis".
In archaeology, radiocarbon dating is an important technique to establish the age of information sources. Methods of this kind were the ideal when history established itself as both a scientific discipline and as a profession based on "scientific" principles in the last part of the 1880s (although radiocarbon dating is a more recent example of such methods). The empiricist movement in history brought along both "source criticism" as a research method and also in many countries large scale publishing efforts to make valid editions of "source materials" such as important letters and official documents (e.g. as facsimiles or transcriptions).
Historiography and historical method include the study of the reliability of the sources used, in terms of, for example, authorship, credibility of the author, and the authenticity or corruption of the text.
Biblical studies
Source criticism, as the term is used in biblical criticism, refers to the attempt to establish the sources used by the author and/or redactor of the final text. The term "literary criticism" is occasionally used as a synonym.
Biblical source criticism originated in the 18th century with the work of Jean Astruc, who adapted the methods already developed for investigating the texts of classical antiquity (Homer's Iliad in particular) to his own investigation into the sources of the Book of Genesis. It was subsequently considerably developed by German scholars in what was known as "the higher criticism", a term no longer in widespread use. The ultimate aim of these scholars was to reconstruct the history of the biblical text, as well as the religious history of ancient Israel.
Related to source criticism is redaction criticism which seeks to determine how and why the redactor (editor) put the sources together the way he did. Also related is form criticism and tradition history which try to reconstruct the oral prehistory behind the identified written sources.
Journalism
Journalists often work under strong time pressure and have access to only a limited number of information sources, such as news bureaus, persons who may be interviewed, newspapers, journals and so on (see journalism sourcing). Journalists therefore have less scope for conducting serious source criticism than, for example, historians do.
Legal studies
The most important legal sources are created by parliaments, governments, courts, and legal researchers. They may be written or informal and based on established practices. Views concerning the quality of sources differ among legal philosophies: legal positivism holds that the text of the law should be considered in isolation, while legal realism, legal interpretivism, critical legal studies and feminist legal criticism interpret the law on a broader cultural basis.
See also
Argumentation theory
Bias
Critical thinking
Deception
Fabrication (science)
Exegesis
False document
Fraud
Plagiarism
Psychological warfare
Q source
Scholarly method
Notes
References
American Library Association (1994) Evaluating Information: A Basic Checklist. Brochure. American Library Association
Anderson, Richard C. (1978). Schema-directed processes in language comprehension. IN: NATO International Conference on Cognitive Psychology and Instruction, 1977, Amsterdam: Cognitive Psychology and Instruction. Ed. by A. M. Lesgold, J. W. Pellegrino, S. D. Fokkema & R. Glaser. New York: Plenum Press (pp. 67–82).
Anderson, Richard C. & Pichert, J. W. (1977). Recall of previously unrecallable information following a shift of perspective. Urbana, Il: University of Illinois, Center for the Study of Reading, April. 1977. (Technical Report 41). Available in full-text from: http://eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/31/83/58.pdf
Bazerman, Charles (1995). The Informed Writer: Using Sources in the Disciplines. 5th ed. Houghton Mifflin.
Bee, Ronald E. (1983). Statistics and Source Criticism. Vetus Testamentum, Volume 33, Number 4, 483–488.
Beecher-Monas, Erica (2007). Evaluating scientific evidence : an interdisciplinary framework for intellectual due process. Cambridge; New York: Cambridge University Press.
Bernheim, Ernst (1889). Lehrbuch der Historischen Methode und der Geschichtsphilosophie [Guidebook for Historical Method and the Philosophy of History]. Leipzig: Duncker & Humblot.
Brundage, Anthony (2007). Going to the Sources: A Guide to Historical Research and Writing, 4th Ed. Wheeling, Illinois: Harlan Davidson, Inc. (3rd edition, 1989 cited in text above).
Chesney, T. (2006). An empirical examination of Wikipedia's credibility. First Monday, 11(11), URL: http://firstmonday.org/issues/issue11_11/chesney/index.html
Encyclopædia Britannica (2006). Fatally Flawed. Refuting the recent study on encyclopedic accuracy by the journal Nature. http://corporate.britannica.com/britannica_nature_response.pdf Nature's response March 23, 2006: http://www.nature.com/press_releases/Britannica_response.pdf
Engeldinger, Eugene A. (1988) Bibliographic Instruction and Critical Thinking: The Contribution of the Annotated Bibliography. Research Quarterly, Vol. 28, Winter, p. 195–202
Engeldinger, Eugene A. (1998) Technology Infrastructure and Information Literacy. Library Philosophy and Practice Vol. 1, No. 1
Fritch, J. W., & Cromwell, R. L. (2001). Evaluating Internet resources: Identity, affiliation, and cognitive authority in a networked world. Journal of the American Society for Information Science and Technology, 52, 499–507.
Gerhart, Susan L. (2004). Do Web search engines suppress controversy?. First Monday 9(1).
Gudmundsson, David (2007). När kritiska elever är målet. Att undervisa i källkritik på gymnasiet. [When the Goal is Critical Students. Teaching Source Criticism in Upper Secondary School]. Malmö, Sweden: Malmö högskola. Full text
Hardtwig, W. (2001). Ranke, Leopold von (1795–1886). IN: Smelser, N. J. & Baltes, P. B. (eds.) International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Elsevier. (12738–12741).
Harris, Ben (1979). Whatever Happened to Little Albert? American Psychologist, 34, 2, pp. 151–160. link to full text
Harris, Ben (1980). Ceremonial versus critical history of psychology. American Psychologist, 35(2), 218–219. (Note).
Healy, Jack (2008). Was the Dear Leader Photoshopped In? November 7, 2008, 2:57 pm [President Kim Jong Il, North Korea]. http://thelede.blogs.nytimes.com/2008/11/07/was-the-dear-leader-photoshopped-in/?scp=7&sq=Kim%20Jong-il&st=cse
Hjørland, Birger (2008). Source criticism. In: Epistemological Lifeboat. Ed. by Birger Hjørland & Jeppe Nicolaisen.
Howell, Martha & Prevenier, Walter (2001). From Reliable Sources: An Introduction to Historical Methods. Ithaca: Cornell University Press. .
Katzer, Jeffrey; Cook, Kenneth H. & Crouch, Wayne W. (1998). Evaluating Information: A Guide for Users of Social Science Research. 4th ed. Boston, MA: McGraw-Hill.
King, David (1997) The Commissar Vanishes: the falsification of photographs and art in Stalin's Russia. Metropolitan Books, New York.
Langlois, Charles-Victor & Seignobos, Charles (1898). Introduction aux études historiques [Introduction to the Study of History]. Paris: Librairie Hachette. Full text . Introduction to the Study of History Full text
Leth, Göran & Thurén, Torsten (2000). Källkritik för internet . Stockholm: Styrelsen för Psykologiskt Försvar. (Retrieved 2007-11-30).
Loftus, Elizabeth F. (1996). Eyewitness Testimony. Revised edition. Cambridge, MA: Harvard University Press. (Original edition: 1979).
Lorenz, C. (2001). History: Theories and Methods. IN: Smelser, N. J. & Baltes, P. B. (eds.) International Encyclopedia of the Social and Behavioral Sciences. Amsterdam: Elsevier. (Pp. 6869–6876).
Mathewson, Daniel B. (2002). A critical binarism: Source criticism and deconstructive criticism. Journal for the Study of the Old Testament no98, pp. 3–28. Abstract: When classifying the array of interpretive methods currently available, biblical critics regularly distinguish between historical-critical methods, on the one hand, and literary critical methods, on the other. Frequently, methods on one side of the divide are said to be antagonistic to certain methods on the other. This article examines two such presumed antagonistic methods, source criticism and deconstructive criticism, and argues that they are not, in fact, antagonistic, but similar: both are postmodern movements, and both share an interpretive methodology (insofar as it is correct to speak of a deconstructive methodology). This argument is illustrated with a source-critical and a deconstructive reading of Exodus 14.
Mattus, Maria (2007). Finding Credible Information: A Challenge to Students Writing Academic Essays. Human IT 9(2), 1–28. Retrieved 2007-09-04 from:
Mintz, Anne P. (ed.). (2002). Web of deception. Misinformation on the Internet. Medford, NJ: Information Today.
Müller, Philipp (2009). Understanding history: Hermeneutics and source-criticism in historical scholarship. IN: Dobson, Miriam & Ziemann, Benjamin (eds): Reading primary sources. The interpretation of texts from nineteenth and twentieth-century history. London: Routledge (pp. 21–36).
Olden-Jørgensen, Sebastian (2001). Til Kilderne: Introduktion til Historisk Kildekritik (in Danish). [To the sources: Introduction to historical source criticism]. København: Gads Forlag. .
Reinfandt, Christoph (2009). Reading texts after the linguistic turn: approaches from literary studies and their implementation. IN: Dobson, Miriam & Ziemann, Benjamin (eds): Reading primary sources. The interpretation of texts from nineteenth and twentieth-century history. London: Routledge (pp. 37–54).
Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the Web. Journal of the American Society for Information Science and Technology, 53(2), 145–161. https://web.archive.org/web/20090731152623/http://www.si.umich.edu/rieh/papers/rieh_jasist2002.pdf
Rieh, S. Y. (2005). Cognitive authority. I: K. E. Fisher, S. Erdelez, & E. F. McKechnie (Eds.), Theories of information behavior: A researchers' guide . Medford, NJ: Information Today (pp. 83–87). https://web.archive.org/web/20080512170752/http://newweb2.si.umich.edu/rieh/papers/rieh_IBTheory.pdf
Rieh, Soo Young & Danielson, David R. (2007). Credibility: A multidisciplinary framework. Annual Review of Information Science and Technology, 41, 307–364.
Riegelman, Richard K. (2004). Studying a Study and Testing a Test: How to Read the Medical Evidence. 5th ed. Philadelphia, PA: Lippincott Williams & Wilkins.
Savolainen, R. (2007). Media credibility and cognitive authority. The case of seeking orienting information. Information Research, 12(3) paper 319. Available at https://web.archive.org/web/20180416064908/http://www.informationr.net/ir///12-3/paper319.html
Slife, Brent D. & Williams, R. N. (1995). What's behind the research? Discovering hidden assumptions in the behavioral sciences. Thousand Oaks, CA: Sage Publications. ("A Consumers Guide to the Behavioral Sciences").
Taylor, John (1991). War photography; realism in the British press. London : Routledge.
Thurén, Torsten. (1997). Källkritik. Stockholm: Almqvist & Wiksell.
Walton, Douglas (1998). Fallacies. IN: Routledge Encyclopedia of Philosophy, Version 1.0, London: Routledge
Webb, E J; Campbell, D T; Schwartz, R D & Sechrest, L (2000). Unobtrusive measures; revised edition. Sage Publications Inc.
Wilkinson, G.L., Bennett, L.T., & Oliver, K.M. (1997). Evaluation criteria and indicators of quality for Internet resources. Educational Technology, 37(3), 52–59.
Wilson, Patrick (1983). Second-Hand Knowledge. An Inquiry into Cognitive Authority. Westport, Conn.: Greenwood.
External links
The Source Compass: Source Criticism
The History Sourcebook: The Need for Source Criticism
Error
Library science
Literary criticism
Scientific method
Scientific misconduct
Skepticism
Sources
Information science | Source criticism | [
"Technology"
] | 6,244 | [
"Scientific misconduct",
"Ethics of science and technology"
] |
2,640,459 | https://en.wikipedia.org/wiki/Hermitian%20manifold | In mathematics, and more specifically in differential geometry, a Hermitian manifold is the complex analogue of a Riemannian manifold. More precisely, a Hermitian manifold is a complex manifold with a smoothly varying Hermitian inner product on each (holomorphic) tangent space. One can also define a Hermitian manifold as a real manifold with a Riemannian metric that preserves a complex structure.
A complex structure is essentially an almost complex structure with an integrability condition, and this condition yields a unitary structure (U(n) structure) on the manifold. By dropping this condition, we get an almost Hermitian manifold.
On any almost Hermitian manifold, we can introduce a fundamental 2-form (or cosymplectic structure) that depends only on the chosen metric and the almost complex structure. This form is always non-degenerate. With the extra integrability condition that it is closed (i.e., it is a symplectic form), we get an almost Kähler structure. If both the almost complex structure and the fundamental form are integrable, then we have a Kähler structure.
Formal definition
A Hermitian metric on a complex vector bundle $E$ over a smooth manifold $M$ is a smoothly varying positive-definite Hermitian form on each fiber. Such a metric can be viewed as a smooth global section $h$ of the vector bundle $(E \otimes \bar{E})^*$ such that for every point $p$ in $M$,
$h_p(\eta, \bar\zeta) = \overline{h_p(\zeta, \bar\eta)}$
for all $\zeta$, $\eta$ in the fiber $E_p$ and
$h_p(\eta, \bar\eta) > 0$
for all nonzero $\eta$ in $E_p$.
A Hermitian manifold is a complex manifold with a Hermitian metric on its holomorphic tangent bundle. Likewise, an almost Hermitian manifold is an almost complex manifold with a Hermitian metric on its holomorphic tangent bundle.
On a Hermitian manifold the metric can be written in local holomorphic coordinates $(z^\alpha)$ as
$h = h_{\alpha\bar\beta}\, dz^\alpha \otimes d\bar{z}^\beta,$
where $h_{\alpha\bar\beta}$ are the components of a positive-definite Hermitian matrix.
Riemannian metric and associated form
A Hermitian metric h on an (almost) complex manifold M defines a Riemannian metric g on the underlying smooth manifold. The metric g is defined to be the real part of h:
$g = \tfrac{1}{2}\left(h + \bar{h}\right).$
The form g is a symmetric bilinear form on TMC, the complexified tangent bundle. Since g is equal to its conjugate it is the complexification of a real form on TM. The symmetry and positive-definiteness of g on TM follow from the corresponding properties of h. In local holomorphic coordinates the metric g can be written
$g = \tfrac{1}{2} h_{\alpha\bar\beta}\left(dz^\alpha \otimes d\bar{z}^\beta + d\bar{z}^\beta \otimes dz^\alpha\right).$
One can also associate to h a complex differential form ω of degree (1,1). The form ω is defined as minus the imaginary part of h:
$\omega = \tfrac{i}{2}\left(h - \bar{h}\right).$
Again since ω is equal to its conjugate it is the complexification of a real form on TM. The form ω is called variously the associated (1,1) form, the fundamental form, or the Hermitian form. In local holomorphic coordinates ω can be written
$\omega = \tfrac{i}{2} h_{\alpha\bar\beta}\, dz^\alpha \wedge d\bar{z}^\beta.$
It is clear from the coordinate representations that any one of the three forms $h$, $g$, and $\omega$ uniquely determines the other two. The Riemannian metric $g$ and associated (1,1) form $\omega$ are related by the almost complex structure $J$ as follows
$\omega(u, v) = g(Ju, v)$
for all complex tangent vectors $u$ and $v$. The Hermitian metric $h$ can be recovered from $g$ and $\omega$ via the identity
$h = g - i\omega.$
All three forms h, g, and ω preserve the almost complex structure $J$. That is,
$h(Ju, Jv) = h(u, v), \qquad g(Ju, Jv) = g(u, v), \qquad \omega(Ju, Jv) = \omega(u, v)$
for all complex tangent vectors $u$ and $v$.
A Hermitian structure on an (almost) complex manifold can therefore be specified by either
a Hermitian metric $h$ as above,
a Riemannian metric $g$ that preserves the almost complex structure $J$, or
a nondegenerate 2-form $\omega$ which preserves $J$ and is positive-definite in the sense that $\omega(u, Ju) > 0$ for all nonzero real tangent vectors $u$.
Note that many authors call $g$ itself the Hermitian metric.
Properties
Every (almost) complex manifold admits a Hermitian metric. This follows directly from the analogous statement for Riemannian metrics. Given an arbitrary Riemannian metric g on an almost complex manifold M one can construct a new metric g′ compatible with the almost complex structure J in an obvious manner:
$g'(u, v) = \tfrac{1}{2}\left(g(u, v) + g(Ju, Jv)\right).$
Choosing a Hermitian metric on an almost complex manifold M is equivalent to a choice of U(n)-structure on M; that is, a reduction of the structure group of the frame bundle of M from GL(n, C) to the unitary group U(n). A unitary frame on an almost Hermitian manifold is a complex linear frame which is orthonormal with respect to the Hermitian metric. The unitary frame bundle of M is the principal U(n)-bundle of all unitary frames.
Every almost Hermitian manifold M has a canonical volume form which is just the Riemannian volume form determined by g. This form is given in terms of the associated (1,1)-form by
$\mathrm{vol}_M = \frac{\omega^n}{n!} \in \Omega^{n,n}(M),$
where $\omega^n$ is the wedge product of $\omega$ with itself $n$ times. The volume form is therefore a real (n,n)-form on M. In local holomorphic coordinates the volume form is given by
$\mathrm{vol}_M = \left(\tfrac{i}{2}\right)^n \det\left(h_{\alpha\bar\beta}\right)\, dz^1 \wedge d\bar{z}^1 \wedge \cdots \wedge dz^n \wedge d\bar{z}^n.$
One can also consider a hermitian metric on a holomorphic vector bundle.
Kähler manifolds
The most important class of Hermitian manifolds are Kähler manifolds. These are Hermitian manifolds for which the Hermitian form $\omega$ is closed:
$d\omega = 0.$
In this case the form ω is called a Kähler form. A Kähler form is a symplectic form, and so Kähler manifolds are naturally symplectic manifolds.
An almost Hermitian manifold whose associated (1,1)-form is closed is naturally called an almost Kähler manifold. Any symplectic manifold admits a compatible almost complex structure making it into an almost Kähler manifold.
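As a concrete illustration (a standard example stated here for reference, not drawn from the surrounding text), the flat metric on $\mathbb{C}^n$ is Kähler:
$$h = \sum_{\alpha=1}^{n} dz^\alpha \otimes d\bar{z}^\alpha, \qquad \omega = \frac{i}{2} \sum_{\alpha=1}^{n} dz^\alpha \wedge d\bar{z}^\alpha, \qquad d\omega = 0,$$
since the coefficients $h_{\alpha\bar\beta} = \delta_{\alpha\beta}$ are constant. The Fubini–Study metric on complex projective space is another standard Kähler example.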
Integrability
A Kähler manifold is an almost Hermitian manifold satisfying an integrability condition. This can be stated in several equivalent ways.
Let $(M, g, \omega, J)$ be an almost Hermitian manifold of real dimension $2n$ and let $\nabla$ be the Levi-Civita connection of $g$. The following are equivalent conditions for $M$ to be Kähler:
$\omega$ is closed and $J$ is integrable,
$\nabla J = 0$,
$\nabla \omega = 0$,
the holonomy group of $\nabla$ is contained in the unitary group $\mathrm{U}(n)$ associated to $J$,
The equivalence of these conditions corresponds to the "2 out of 3" property of the unitary group.
In particular, if $M$ is a Hermitian manifold, the condition $d\omega = 0$ is equivalent to the apparently much stronger conditions $\nabla\omega = \nabla J = 0$. The richness of Kähler theory is due in part to these properties.
References
Complex manifolds
Differential geometry
Riemannian geometry
Riemannian manifolds
Structures on manifolds | Hermitian manifold | [
"Mathematics"
] | 1,302 | [
"Riemannian manifolds",
"Space (mathematics)",
"Metric spaces"
] |
2,640,809 | https://en.wikipedia.org/wiki/Thorne%E2%80%93Hawking%E2%80%93Preskill%20bet | The Thorne–Hawking–Preskill bet was a public bet on the outcome of the black hole information paradox made in 1997 by physics theorists Kip Thorne and Stephen Hawking on the one side, and John Preskill on the other, according to the document they signed 6 February 1997, as shown in Hawking's 2001 book The Universe in a Nutshell.
Overview
Thorne and Hawking argued that since general relativity made it impossible for black holes to radiate and lose information, the mass-energy and information carried by Hawking radiation must be "new", and must not originate from inside the black hole event horizon. Since this contradicted the quantum-mechanical idea of microcausality, quantum mechanics would need to be rewritten. Preskill argued the opposite, that since quantum mechanics suggests that the information emitted by a black hole relates to information that fell in at an earlier time, the view of black holes given by general relativity must be modified in some way. The winning side of the bet would receive an encyclopedia of their choice, "from which information can be retrieved at will".
In 2004, Hawking announced that he was conceding the bet, and that he now believed that black hole horizons should fluctuate and leak information, in doing so providing Preskill with a copy of Total Baseball, The Ultimate Baseball Encyclopedia. Comparing the useless information obtainable from a black hole to "burning an encyclopedia", Hawking later joked, "I gave John an encyclopedia of baseball, but maybe I should just have given him the ashes." Thorne, however, remained unconvinced of Hawking's proof and declined to contribute to the award. Hawking's argument that he solved the paradox has not yet been wholly accepted by the scientific community, and a consensus has not yet been reached that Hawking provided a strong enough argument that this is in fact what happens.
Hawking had earlier speculated that the singularity at the centre of a black hole could form a bridge to a "baby universe", into which the lost information could pass; such theories have been very popular in science fiction. But according to Hawking's new idea, presented at the 17th International Conference on General Relativity and Gravitation, on 21 July 2004 in Dublin, black holes eventually transmit, in a garbled form, information about all matter they swallow:
Earlier Thorne–Hawking bet
An older bet from 1974 – about the existence of black holes – was described by Hawking as an "insurance policy" of sorts:
In the updated and expanded edition of A Brief History of Time, Hawking states, "Although the situation with Cygnus X-1 has not changed much since we made the bet in 1975, there is now so much other observational evidence in favour of black holes that I have conceded the bet. I paid the specified penalty, which was a one year subscription to Penthouse, to the outrage of Kip's liberated wife."
While Hawking described the bet as having been made in 1975, the written bet itself—in Thorne's handwriting, with his and Hawking's signatures—bears witness signatures under the legend "Witnessed this tenth day of December 1974". Thorne confirmed this date on the 10 January 2018 episode of Nova on PBS.
See also
Hawking radiation
Scientific wager
References
Black holes
Stephen Hawking
Wagers
1997 in science
February 1997 | Thorne–Hawking–Preskill bet | [
"Physics",
"Astronomy"
] | 692 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
22,431,652 | https://en.wikipedia.org/wiki/FASTQ%20format | FASTQ format is a text-based format for storing both a biological sequence (usually nucleotide sequence) and its corresponding quality scores. Both the sequence letter and quality score are each encoded with a single ASCII character for brevity.
It was originally developed at the Wellcome Trust Sanger Institute to bundle a FASTA formatted sequence and its quality data, but has become the de facto standard for storing the output of high-throughput sequencing instruments such as the Illumina Genome Analyzer.
Format
A FASTQ file has four line-separated fields per sequence:
Field 1 begins with a '@' character and is followed by a sequence identifier and an optional description (like a FASTA title line).
Field 2 is the raw sequence letters.
Field 3 begins with a '+' character and is optionally followed by the same sequence identifier (and any description) again.
Field 4 encodes the quality values for the sequence in Field 2, and must contain the same number of symbols as letters in the sequence.
A FASTQ file containing a single sequence might look like this:
@SEQ_ID
GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
+
!''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65
The byte representing quality runs from 0x21 (lowest quality; '!' in ASCII) to 0x7e (highest quality; '~' in ASCII).
Here are the quality value characters in left-to-right increasing order of quality (ASCII):
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
The original Sanger FASTQ files split long sequences and quality strings over multiple lines, as is typically done for FASTA files. Accounting for this makes parsing more complicated due to the choice of "@" and "+" as markers (as these characters can also occur in the quality string). Multi-line FASTQ files (and consequently multi-line FASTQ parsers) are less common now that the majority of sequencing carried out is short-read Illumina sequencing, with typical sequence lengths of around 100bp.
Illumina sequence identifiers
Sequences from the Illumina software use a systematic identifier:
Versions of the Illumina pipeline since 1.4 appear to use #NNNNNN instead of #0 for the multiplex ID, where NNNNNN is the sequence of the multiplex tag.
With Casava 1.8 the format of the '@' line has changed:
Note that more recent versions of Illumina software output a sample number (defined by the order of the samples in the sample sheet) in place of an index sequence when an index sequence is not explicitly specified for a sample in the sample sheet. For example, the following header might appear in a FASTQ file belonging to the first sample of a batch of samples:
NCBI Sequence Read Archive
FASTQ files from the INSDC Sequence Read Archive often include a description, e.g.
In this example there is an NCBI-assigned identifier, and the description holds the original identifier from Solexa/Illumina (as described above) plus the read length. Sequencing was performed in paired-end mode (~500bp insert size), see SRR001666. The default output format of fastq-dump produces entire spots, containing any technical reads and typically single or paired-end biological reads.
$ fastq-dump.2.9.0 -Z -X 2 SRR001666
Read 2 spots for SRR001666
Written 2 spots for SRR001666
@SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=72
GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACCAAGTTACCCTTAACAACTTAAGGGTTTTCAAATAGA
+SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=72
IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9ICIIIIIIIIIIIIIIIIIIIIDIIIIIII>IIIIII/
@SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=72
GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGAAGCAGAAGTCGATGATAATACGCGTCGTTTTATCAT
+SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=72
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBIIIIIIIIIIIIIIIIIIIIIIIGII>IIIII-I)8I
Modern usage of FASTQ almost always involves splitting the spot into its biological reads, as described in submitter-provided metadata:
$ fastq-dump -X 2 SRR001666 --split-3
Read 2 spots for SRR001666
Written 2 spots for SRR001666
$ head SRR001666_1.fastq SRR001666_2.fastq
==> SRR001666_1.fastq <==
@SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=36
GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACC
+SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=36
IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9IC
@SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=36
GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA
+SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=36
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI
==> SRR001666_2.fastq <==
@SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=36
AAGTTACCCTTAACAACTTAAGGGTTTTCAAATAGA
+SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=36
IIIIIIIIIIIIIIIIIIIIDIIIIIII>IIIIII/
@SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=36
AGCAGAAGTCGATGATAATACGCGTCGTTTTATCAT
+SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=36
IIIIIIIIIIIIIIIIIIIIIIGII>IIIII-I)8I
When present in the archive, fastq-dump can attempt to restore read names to original format. NCBI does not store original read names by default:
$ fastq-dump -X 2 SRR001666 --split-3 --origfmt
Read 2 spots for SRR001666
Written 2 spots for SRR001666
$ head SRR001666_1.fastq SRR001666_2.fastq
==> SRR001666_1.fastq <==
@071112_SLXA-EAS1_s_7:5:1:817:345
GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACC
+071112_SLXA-EAS1_s_7:5:1:817:345
IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9IC
@071112_SLXA-EAS1_s_7:5:1:801:338
GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGA
+071112_SLXA-EAS1_s_7:5:1:801:338
IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBI
==> SRR001666_2.fastq <==
@071112_SLXA-EAS1_s_7:5:1:817:345
AAGTTACCCTTAACAACTTAAGGGTTTTCAAATAGA
+071112_SLXA-EAS1_s_7:5:1:817:345
IIIIIIIIIIIIIIIIIIIIDIIIIIII>IIIIII/
@071112_SLXA-EAS1_s_7:5:1:801:338
AGCAGAAGTCGATGATAATACGCGTCGTTTTATCAT
+071112_SLXA-EAS1_s_7:5:1:801:338
IIIIIIIIIIIIIIIIIIIIIIGII>IIIII-I)8I
In the example above, the original read names were used rather than the accessioned read name. NCBI accessions runs and the reads they contain. Original read names, assigned by sequencers, are able to function as locally unique identifiers of a read, and convey exactly as much information as a serial number. The ids above were algorithmically assigned based upon run information and geometric coordinates. Early SRA loaders parsed these ids and stored their decomposed components internally. NCBI stopped recording read names because they are frequently modified from the vendors' original format in order to associate some additional information meaningful to a particular processing pipeline, and this caused name format violations that resulted in a high number of rejected submissions. Without a clear schema for read names, their function remains that of a unique read id, conveying the same amount of information as a read serial number. See various SRA Toolkit issues for details and discussions.
Also note that fastq-dump converts this FASTQ data from the original Solexa/Illumina encoding to the Sanger standard (see encodings below). This is because the SRA serves as a repository for NGS information, rather than format. The various *-dump tools are capable of producing data in several formats from the same source. The requirements for doing so have been dictated by users over several years, with the majority of early demand coming from the 1000 Genomes Project.
Variations
Quality
A quality value Q is an integer mapping of p (i.e., the probability that the corresponding base call is incorrect). Two different equations have been in use. The first is the standard Sanger variant to assess reliability of a base call, otherwise known as Phred quality score:
$Q_\text{sanger} = -10 \, \log_{10} p.$
The Solexa pipeline (i.e., the software delivered with the Illumina Genome Analyzer) earlier used a different mapping, encoding the odds p/(1-p) instead of the probability p:
$Q_\text{solexa} = -10 \, \log_{10} \frac{p}{1-p}.$
Although both mappings are asymptotically identical at higher quality values, they differ at lower quality levels (i.e., approximately p > 0.05, or equivalently, Q < 13).
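To make the difference concrete, the two mappings above can be computed side by side; this short Python sketch is purely illustrative and not taken from any vendor's pipeline:

import math

def q_sanger(p):
    # Phred/Sanger: Q = -10 * log10(p)
    return -10 * math.log10(p)

def q_solexa(p):
    # Early Solexa: Q = -10 * log10(p / (1 - p))
    return -10 * math.log10(p / (1 - p))

for p in (0.5, 0.2, 0.05, 0.01, 0.001):
    print(f"p={p:<6}  Sanger Q={q_sanger(p):6.2f}  Solexa Q={q_solexa(p):6.2f}")
# The scores agree closely for small p (high quality) and diverge
# below roughly Q = 13 (p above about 0.05); at p = 0.5 the Solexa
# score is 0 while the Sanger score is about 3.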
At times there has been disagreement about which mapping Illumina actually uses. The user guide (Appendix B, page 122) for version 1.4 of the Illumina pipeline states that: "The scores are defined as $Q = 10 \, \log_{10} \frac{p}{1-p}$, where $p$ is the probability of a base call corresponding to the base in question". In retrospect, this entry in the manual appears to have been an error. The user guide (What's New, page 5) for version 1.5 of the Illumina pipeline lists this description instead: "Important Changes in Pipeline v1.3. The quality scoring scheme has changed to the Phred [i.e., Sanger] scoring scheme, encoded as an ASCII character by adding 64 to the Phred value. A Phred score of a base is: $Q_\text{phred} = -10 \, \log_{10} e$, where e is the estimated probability of a base being wrong."
Encoding
Sanger format can encode a Phred quality score from 0 to 93 using ASCII 33 to 126 (although in raw read data the Phred quality score rarely exceeds 60, higher scores are possible in assemblies or read maps). Also used in SAM format. As of the end of February 2011, Illumina's pipeline CASAVA version 1.8 directly produces FASTQ in Sanger format, according to the announcement on the seqanswers.com forum.
Element Biosciences AVITI reads are encoded following the Sanger convention: Phred quality scores from 0 to 93 are encoded using ASCII 33 to 126. Raw reads typically exhibit base quality scores in the range of [0, 55].
PacBio HiFi reads, which are typically stored in SAM/BAM format, use the Sanger convention: Phred quality scores from 0 to 93 are encoded using ASCII 33 to 126. Raw PacBio subreads use the same convention but typically assign a placeholder base quality (Q0) to all bases in the read.
Oxford Nanopore Duplex reads, called using the dorado basecaller are typically stored in SAM/BAM format. After changing to a 16-bit internal quality representation, the reported base quality limit is q50 (S).
Solexa/Illumina 1.0 format can encode a Solexa/Illumina quality score from -5 to 62 using ASCII 59 to 126 (although in raw read data Solexa scores from -5 to 40 only are expected)
Starting with Illumina 1.3 and before Illumina 1.8, the format encoded a Phred quality score from 0 to 62 using ASCII 64 to 126 (although in raw read data Phred scores from 0 to 40 only are expected).
Starting in Illumina 1.5 and before Illumina 1.8, the Phred scores 0 to 2 have a slightly different meaning. The values 0 and 1 are no longer used and the value 2, encoded by ASCII 66 "B", is used also at the end of reads as a Read Segment Quality Control Indicator. The Illumina manual (page 30) states the following: If a read ends with a segment of mostly low quality (Q15 or below), then all of the quality values in the segment are replaced with a value of 2 (encoded as the letter B in Illumina's text-based encoding of quality scores)... This Q2 indicator does not predict a specific error rate, but rather indicates that a specific final portion of the read should not be used in further analyses. Also, the quality score encoded as "B" letter may occur internally within reads at least as late as pipeline version 1.6, as shown in the following example:
@HWI-EAS209_0006_FC706VJ:5:58:5894:21141#ATCACG/1
TTAATTGGTAAATAAATCTCCTAATAGCTTAGATNTTACCTTNNNNNNNNNNTAGTTTCTTGAGATTTGTTGGGGGAGACATTTTTGTGATTGCCTTGAT
+HWI-EAS209_0006_FC706VJ:5:58:5894:21141#ATCACG/1
efcfffffcfeefffcffffffddf`feed]`]_Ba_^__[YBBBBBBBBBBRTT\]][]dddd`ddd^dddadd^BBBBBBBBBBBBBBBBBBBBBBBB
An alternative interpretation of this ASCII encoding has been proposed. Also, in Illumina runs using PhiX controls, the character 'B' was observed to represent an "unknown quality score". The error rate of 'B' reads was roughly 3 Phred scores lower than the mean observed score of a given run.
Starting in Illumina 1.8, the quality scores have basically returned to the use of the Sanger format (Phred+33).
For raw reads, the range of scores will depend on the technology and the base caller used, but will typically be up to 41 for recent Illumina chemistry. Since the maximum observed quality score was previously only 40, various scripts and tools break when they encounter data with quality values larger than 40. For processed reads, scores may be even higher. For example, quality values of 45 are observed in reads from Illumina's Long Read Sequencing Service (previously Moleculo).
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS.....................................................
..........................XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX......................
...............................IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII......................
.................................JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ.....................
LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL....................................................
NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN...........................................
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
| | | | | | |
33 59 64 73 88 104 126
0........................26...31.......40
-5....0........9.............................40
0........9.............................40
3.....9..............................41
0.2......................26...31........41
0..................20........30........40........50
0..................20........30........40........50...55
0..................20........30........40........50..........................................93
S - Sanger Phred+33, raw reads typically (0, 40)
X - Solexa Solexa+64, raw reads typically (-5, 40)
I - Illumina 1.3+ Phred+64, raw reads typically (0, 40)
J - Illumina 1.5+ Phred+64, raw reads typically (3, 41)
with 0=unused, 1=unused, 2=Read Segment Quality Control Indicator (bold)
(Note: See discussion above).
L - Illumina 1.8+ Phred+33, raw reads typically (0, 41)
N - Nanopore Phred+33, Duplex reads typically (0, 50)
E - ElemBio AVITI Phred+33, raw reads typically (0, 55)
P - PacBio Phred+33, HiFi reads typically (0, 93)
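Because the Illumina 1.3+/1.5+ and Sanger/Illumina 1.8+ encodings use the same Phred scores and differ only in the ASCII offset (64 versus 33), re-encoding is a simple character shift. The following Python sketch is illustrative only and does not handle genuine Solexa+64 data, whose scores are defined via odds and would also need the score conversion described above:

def phred64_to_phred33(quality):
    # Shift every character down by 31 (offset 64 -> offset 33);
    # the underlying Phred scores are unchanged.
    return "".join(chr(ord(ch) - 64 + 33) for ch in quality)

print(phred64_to_phred33("efcfffffcfeefffcffffffddf`feed"))
# -> FGDGGGGGDGFFGGGDGGGGGGEEGAGFFE  (same scores, Sanger encoding)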
Color space
For SOLiD data, the format is modified to a color space FASTQ sequence (CSFASTQ), where bases in the sequence are combined with the numbers 0, 1, 2, and 3, indicating how bases are modified relative to the previous base in the sequence (0: no change; 1: transition; 2: non-complementary transversion; 3: complementary transversion). This format matched the different sequencing chemistry used by SOLiD sequencers. Initial representations only used nucleotide bases at the start of the sequence, but later versions included bases embedded at periodic intervals to improve basecalling and mapping accuracy.
The quality values for CSFASTQ are identical to those of the Sanger format. Alignment tools differ in their preferred version of the quality values: some include a quality score (set to 0, i.e. '!') for the leading nucleotide, others do not. The sequence read archive includes this quality score.
FAST5 and HDF5 evolutions
The FAST4 format was invented as a derivative of the FASTQ format where each of the 4 bases (A,C,G,T) had separate probabilities stored. It was part of the Swift basecaller, an open source package for primary data analysis on next-gen sequence data "from images to basecalls".
The FAST5 format was invented as an extension of the FAST4 format. The FAST5 files are Hierarchical Data Format 5 (HDF5) files with a specific schema defined by Oxford Nanopore Technologies (ONT).
Simulation
FASTQ read simulation has been approached by several tools.
A comparison of those tools can be seen here.
Compression
General compressors
General-purpose tools such as Gzip and bzip2 regard FASTQ as a plain text file and result in suboptimal compression ratios. NCBI's Sequence Read Archive encodes metadata using the LZ-77 scheme.
General FASTQ compressors typically compress distinct fields (read names, sequences, comments, and quality scores) in a FASTQ file separately; these include DSRC and DSRC2, FQC, LFQC, Fqzcomp, and Slimfastq.
Reads
Having a reference genome around is convenient because then instead of storing the nucleotide sequences themselves, one can just align the reads to the reference genome and store the positions (pointers) and mismatches; the pointers can then be sorted according to their order in the reference sequence and encoded, e.g., with run-length encoding. When the coverage or the repeat content of the sequenced genome is high, this leads to a high compression ratio.
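As a rough sketch of the idea just described (the positions below are invented for illustration, not real alignment output), mapped read positions can be sorted, delta-encoded against the previous position, and then run-length encoded; at high coverage most deltas are zero or small, which compresses well:

from itertools import groupby

positions = sorted([10004, 10004, 10005, 10005, 10005, 10210, 10210])
deltas = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
rle = [(value, sum(1 for _ in group)) for value, group in groupby(deltas)]
print(deltas)  # [10004, 0, 1, 0, 0, 205, 0]
print(rle)     # [(10004, 1), (0, 1), (1, 1), (0, 2), (205, 1), (0, 1)]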
Unlike the SAM/BAM formats, FASTQ files do not specify a reference genome. Alignment-based FASTQ compressors supports the use of either user-provided or de novo assembled reference: LW-FQZip uses a provided reference genome and Quip, Leon, k-Path and KIC perform de novo assembly using a de Bruijn graph-based approach.
Explicit read mapping and de novo assembly are typically slow. Reordering-based FASTQ compressors first cluster reads that share long substrings and then independently compress reads in each cluster after reordering them or assembling them into longer contigs, achieving perhaps the best trade-off between the running time and compression rate. SCALCE is the first such tool, followed by Orcom and Mince. BEETL uses a generalized Burrows–Wheeler transform for reordering reads, and HARC achieves better performance with hash-based reordering. AssemblTrie instead assembles reads into reference trees with as few total number of symbols as possible in the reference.
Benchmarks for these tools are available in.
Quality values
Quality values account for about half of the required disk space in the FASTQ format (before compression), and therefore the compression of the quality values can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Both lossless and lossy compression are recently being considered in the literature. For example, the algorithm QualComp performs lossy compression with a rate (number of bits per quality value) specified by the user. Based on rate-distortion theory results, it allocates the number of bits so as to minimize the MSE (mean squared error) between the original (uncompressed) and the reconstructed (after compression) quality values. Other algorithms for compression of quality values include SCALCE and Fastqz. Both are lossless compression algorithms that provide an optional controlled lossy transformation approach. For example, SCALCE reduces the alphabet size based on the observation that “neighboring” quality values are similar in general. For a benchmark, see.
As of the HiSeq 2500 Illumina gives the option to output qualities that have been coarse grained into quality bins. The binned scores are computed directly from the empirical quality score table, which is itself tied to the hardware, software and chemistry that were used during the sequencing experiment.
File extension
There is no standard file extension for a FASTQ file, but .fq and .fastq are commonly used.
Format converters
Biopython version 1.51 onwards (interconverts Sanger, Solexa and Illumina 1.3+)
EMBOSS version 6.1.0 patch 1 onwards (interconverts Sanger, Solexa and Illumina 1.3+)
BioPerl version 1.6.1 onwards (interconverts Sanger, Solexa and Illumina 1.3+)
BioRuby version 1.4.0 onwards (interconverts Sanger, Solexa and Illumina 1.3+)
BioJava version 1.7.1 onwards (interconverts Sanger, Solexa and Illumina 1.3+)
See also
The FASTA format, used to represent genome sequences.
The SAM and CRAM formats, used to represent genome sequencer reads that have been aligned to genome sequences.
The GVF format (Genome Variation Format), an extension based on the GFF3 format.
References
External links
MAQ webpage discussing FASTQ variants
Bioinformatics
Biological sequence format | FASTQ format | [
"Engineering",
"Biology"
] | 6,117 | [
"Bioinformatics",
"Biological engineering",
"Biological sequence format"
] |
820,253 | https://en.wikipedia.org/wiki/Helmholtz%20decomposition | In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus states that certain differentiable vector fields can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. In physics, often only the decomposition of sufficiently smooth, rapidly decaying vector fields in three dimensions is discussed. It is named after Hermann von Helmholtz.
Definition
For a vector field $\mathbf{F}$ defined on a domain $V \subseteq \mathbb{R}^n$, a Helmholtz decomposition is a pair of vector fields $\mathbf{G}$ and $\mathbf{R}$ such that:
$\mathbf{F}(\mathbf{r}) = \mathbf{G}(\mathbf{r}) + \mathbf{R}(\mathbf{r}), \qquad \mathbf{G}(\mathbf{r}) = -\nabla\Phi(\mathbf{r}), \qquad \nabla\cdot\mathbf{R}(\mathbf{r}) = 0.$
Here, $\Phi$ is a scalar potential, $\nabla\Phi$ is its gradient, and $\nabla\cdot\mathbf{R}$ is the divergence of the vector field $\mathbf{R}$. The irrotational vector field $\mathbf{G}$ is called a gradient field and $\mathbf{R}$ is called a solenoidal field or rotation field. This decomposition does not exist for all vector fields and is not unique.
History
The Helmholtz decomposition in three dimensions was first described in 1849 by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on some hydrodynamic basic equations in 1858, which was part of his research on the Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines. Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition could be relaxed, and the Helmholtz decomposition could be extended to higher dimensions. For Riemannian manifolds, the Helmholtz-Hodge decomposition using differential geometry and tensor calculus was derived.
The decomposition has become an important tool for many problems in theoretical physics, but has also found applications in animation, computer vision as well as robotics.
Three-dimensional space
Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential can be defined, such that the rotation field is given by , using the curl of a vector field.
Let $\mathbf{F}$ be a vector field on a bounded domain $V \subseteq \mathbb{R}^3$, which is twice continuously differentiable inside $V$, and let $S$ be the surface that encloses the domain $V$ with outward surface normal $\hat{\mathbf{n}}'$. Then $\mathbf{F}$ can be decomposed into a curl-free component and a divergence-free component as follows:
$\mathbf{F} = -\nabla\Phi + \nabla\times\mathbf{A},$
where
$\Phi(\mathbf{r}) = \frac{1}{4\pi}\int_V \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV' \;-\; \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\cdot\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dS',$
$\mathbf{A}(\mathbf{r}) = \frac{1}{4\pi}\int_V \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV' \;-\; \frac{1}{4\pi}\oint_S \hat{\mathbf{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dS',$
and $\nabla'$ is the nabla operator with respect to $\mathbf{r}'$, not $\mathbf{r}$.
If $V = \mathbb{R}^3$ and $\mathbf{F}$ is therefore unbounded, and $\mathbf{F}$ vanishes faster than $1/r$ as $r \to \infty$, then the surface integrals vanish and one has
$\Phi(\mathbf{r}) = \frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV', \qquad \mathbf{A}(\mathbf{r}) = \frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV'.$
This holds in particular if $\mathbf{F}$ is twice continuously differentiable in $\mathbb{R}^3$ and of bounded support.
Derivation
Solution space
If is a Helmholtz decomposition of , then
is another decomposition if, and only if,
and
where
is a harmonic scalar field,
is a vector field which fulfills
is a scalar field.
Proof:
Set and . According to the definition
of the Helmholtz decomposition, the condition is equivalent to
.
Taking the divergence of each member of this equation yields
, hence is harmonic.
Conversely, given any harmonic function ,
is solenoidal since
Thus, according to the above section, there exists a vector field such that
.
If is another such vector field,
then
fulfills , hence
for some scalar field .
Fields with prescribed divergence and curl
The term "Helmholtz theorem" can also refer to the following. Let be a solenoidal vector field and d a scalar field on which are sufficiently smooth and which vanish faster than at infinity. Then there exists a vector field such that
if additionally the vector field vanishes as , then is unique.
In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type. The proof is by a construction generalizing the one given above: we set
where represents the Newtonian potential operator. (When acting on a vector field, such as , it is defined to act on each component.)
Weak formulation
The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field has an orthogonal decomposition:
where is in the Sobolev space of square-integrable functions on whose partial derivatives defined in the distribution sense are square integrable, and , the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl.
For a slightly smoother vector field , a similar decomposition holds:
where .
Derivation from the Fourier transform
Note that in the theorem stated here, we have imposed the condition that if is not defined on a bounded domain, then shall decay faster than . Thus, the Fourier transform of , denoted as , is guaranteed to exist. We apply the convention
The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of same dimension.
Now consider the following scalar and vector fields:
Hence
Longitudinal and transverse fields
A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component. This terminology comes from the following construction: Compute the three-dimensional Fourier transform $\hat{\mathbf{F}}$ of the vector field $\mathbf{F}$. Then decompose this field, at each point k, into two components, one of which points longitudinally, i.e. parallel to k, the other of which points in the transverse direction, i.e. perpendicular to k. So far, we have
$\hat{\mathbf{F}}(\mathbf{k}) = \hat{\mathbf{F}}_l(\mathbf{k}) + \hat{\mathbf{F}}_t(\mathbf{k}), \qquad \mathbf{k} \times \hat{\mathbf{F}}_l(\mathbf{k}) = \mathbf{0}, \qquad \mathbf{k} \cdot \hat{\mathbf{F}}_t(\mathbf{k}) = 0.$
Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:
$\mathbf{F}(\mathbf{r}) = \mathbf{F}_l(\mathbf{r}) + \mathbf{F}_t(\mathbf{r}), \qquad \nabla \times \mathbf{F}_l(\mathbf{r}) = \mathbf{0}, \qquad \nabla \cdot \mathbf{F}_t(\mathbf{r}) = 0.$
Since $\nabla \times (\nabla \Phi) = \mathbf{0}$ and $\nabla \cdot (\nabla \times \mathbf{A}) = 0$,
we can get
$\mathbf{F}_l = -\nabla \Phi \qquad \text{and} \qquad \mathbf{F}_t = \nabla \times \mathbf{A},$
so this is indeed the Helmholtz decomposition.
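The Fourier-space projection above translates directly into a discrete computation on a periodic grid. The following NumPy sketch is illustrative only (the grid size and test field are arbitrary choices, not from the original text):

import numpy as np

def helmholtz_split(F):
    """Split F, shape (3, n, n, n) on a periodic grid, into a curl-free
    (longitudinal) and a divergence-free (transverse) part."""
    n = F.shape[1]
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    K = np.stack([kx, ky, kz])              # wave vectors, shape (3, n, n, n)
    K2 = np.sum(K * K, axis=0)
    K2[0, 0, 0] = 1.0                       # avoid dividing by zero at k = 0

    F_hat = np.fft.fftn(F, axes=(1, 2, 3))
    # Longitudinal component in Fourier space: k (k . F_hat) / |k|^2
    F_hat_l = K * (np.sum(K * F_hat, axis=0) / K2)
    F_hat_l[:, 0, 0, 0] = 0.0               # assign the mean (k = 0) mode to the transverse part
    F_hat_t = F_hat - F_hat_l

    F_l = np.real(np.fft.ifftn(F_hat_l, axes=(1, 2, 3)))   # curl-free part
    F_t = np.real(np.fft.ifftn(F_hat_t, axes=(1, 2, 3)))   # divergence-free part
    return F_l, F_t

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 16, 16, 16))
F_l, F_t = helmholtz_split(F)
print(np.allclose(F_l + F_t, F))            # True: the parts sum back to F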
Generalization to higher dimensions
Matrix approach
The generalization to dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.
Let be a vector field on a bounded domain which decays faster than for and .
The scalar potential is defined similar to the three dimensional case as:
where as the integration kernel is again the fundamental solution of Laplace's equation, but in d-dimensional space:
with the volume of the d-dimensional unit balls and the gamma function.
For , is just equal to , yielding the same prefactor as above.
The rotational potential is an antisymmetric matrix with the elements:
Above the diagonal are entries which occur again mirrored at the diagonal, but with a negative sign.
In the three-dimensional case, the matrix elements just correspond to the components of the vector potential .
However, such a matrix potential can be written as a vector only in the three-dimensional case, because is valid only for .
As in the three-dimensional case, the gradient field is defined as
The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:
In three-dimensional space, this is equivalent to the rotation of the vector potential.
Tensor approach
In a -dimensional vector space with , can be replaced by the appropriate Green's function for the Laplacian, defined by
where Einstein summation convention is used for the index . For example, in 2D.
Following the same steps as above, we can write
where is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol ,
which is valid in dimensions, where is a -component multi-index. This gives
We can therefore write
where
Note that the vector potential is replaced by a rank- tensor in dimensions.
Because is a function of only , one can replace , giving
Integration by parts can then be used to give
where is the boundary of . These expressions are analogous to those given above for three-dimensional space.
For a further generalization to manifolds, see the discussion of Hodge decomposition below.
Differential forms
The Hodge decomposition is closely related to the Helmholtz decomposition, generalizing from vector fields on R3 to differential forms on a Riemannian manifold M. Most formulations of the Hodge decomposition require M to be compact. Since this is not true of R3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem.
Extensions to fields not decaying at infinity
Most textbooks only deal with vector fields decaying faster than with at infinity. However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than with , which is substantially less strict.
To achieve this, the kernel in the convolution integrals has to be replaced by .
With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomial.
For all analytic vector fields that need not go to zero even at infinity, methods based on partial integration and the Cauchy formula for repeated integration can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions.
Uniqueness of the solution
In general, the Helmholtz decomposition is not uniquely defined.
A harmonic function is a function $\lambda$ that satisfies $\Delta\lambda = 0$.
By adding the harmonic function $\lambda$ to the scalar potential $\Phi$, a different Helmholtz decomposition can be obtained:
$\mathbf{F} = -\nabla(\Phi + \lambda) + \big(\mathbf{R} + \nabla\lambda\big), \qquad \nabla\cdot\big(\mathbf{R} + \nabla\lambda\big) = \nabla\cdot\mathbf{R} + \Delta\lambda = 0.$
For vector fields $\mathbf{F}$ decaying at infinity, it is a plausible choice that the scalar and rotation potentials also decay at infinity.
Because $\lambda = 0$ is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.
This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.
Applications
Electrodynamics
The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in the potential image and solve them more easily. The Helmholtz decomposition can be used to prove that, given electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials.
Fluid dynamics
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier-Stokes equations. If the Helmholtz projection is applied to the linearized incompressible Navier-Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. However, both equations, the Stokes and linearized equations, are equivalent. The operator is called the Stokes operator.
Dynamical systems theory
In the theory of dynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases.
For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained:
$\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}) = \big(\sigma(y - x),\; x(\rho - z) - y,\; xy - \beta z\big).$
The Helmholtz decomposition of $\mathbf{F}$, with the scalar potential $\Phi(\mathbf{x}) = \tfrac{\sigma}{2}x^2 + \tfrac{1}{2}y^2 + \tfrac{\beta}{2}z^2$, is given as:
$\mathbf{G}(\mathbf{x}) = -\nabla\Phi = \big(-\sigma x,\; -y,\; -\beta z\big), \qquad \mathbf{R}(\mathbf{x}) = \big(\sigma y,\; x(\rho - z),\; xy\big).$
The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fix point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.
Medical Imaging
In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement fields into its shear component (divergence-free) and its compression component (curl-free). In this way, the complex shear modulus can be calculated without contributions from compression waves.
Computer animation and robotics
The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics, image reconstruction but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.
See also
Clebsch representation for a related decomposition of vector fields
Darwin Lagrangian for an application
Poloidal–toroidal decomposition for a further decomposition of the divergence-free component .
Scalar–vector–tensor decomposition
Hodge theory generalizing Helmholtz decomposition
Polar factorization theorem
Helmholtz–Leray decomposition used for defining the Leray projection
Notes
References
George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, 4th edition, Academic Press: San Diego (1995) pp. 92–93
George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists – International Edition, 6th edition, Academic Press: San Diego (2005) pp. 95–101
Rutherford Aris, Vectors, tensors, and the basic equations of fluid mechanics, Prentice-Hall (1962), , pp. 70–72
1849 introductions
1849 in science
Vector calculus
Theorems in analysis
Analytic geometry
Hermann von Helmholtz
Theorems in calculus | Helmholtz decomposition | [
"Mathematics"
] | 2,809 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Theorems in calculus",
"Calculus",
"Mathematical problems"
] |
820,572 | https://en.wikipedia.org/wiki/Infiltration%20gallery | An infiltration gallery is a structure including perforated conduits in gravel to expedite transfer of water to or from a soil.
Water supply
Infiltration galleries may be used to collect water from the aquifer underlying a river. Water from an infiltration gallery has the advantage of bank filtration to reduce the water treatment requirements for a surface withdrawal. An infiltration gallery may also be the best way to withdraw water from a thin aquifer or lens of fresh water overlying saline water.
Storm water disposal
Infiltration galleries may be used to supplement a storm sewer, by directing storm runoff from non-road areas.
While the catchbasins under sewer grates work well on swift-flowing surfaces like asphalt and concrete, heavy storm water flow on grass lawns or other open areas will pool in low areas if there is no outlet. An infiltration gallery serves this purpose in two ways.
Primarily, upright plastic pipes capped with simple grates are placed every 5–8 metres along the low point of a slope, to handle heavy surface runoff. The pipes proceed straight down, about two metres, to a horizontal cross-pipe; this pipe is the secondary system.
About ten per cent of the surface area of a horizontal pipe is then perforated slightly and surrounded by gravel. Initially, runoff will exit the pipe and infiltrate the gravel to the soil beyond, dissipating naturally. As flow increases, the water will eventually fill the pipe and need to be dissipated more quickly. Thus, a catchbasin is placed at the lowest point of the sloping ground, which is connected to the storm sewer system at large.
Such galleries are a relatively new development in urban planning, and are thus found in newer housing developments.
References
Drainage
Hydraulic engineering
Subterranea (geography) | Infiltration gallery | [
"Physics",
"Engineering",
"Environmental_science"
] | 363 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
821,148 | https://en.wikipedia.org/wiki/Level%20of%20measurement | Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables. Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio. This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others. Other classifications include those by Mosteller and Tukey, and by Chrisman.
Stevens's typology
Overview
Stevens proposed his typology in a 1946 Science article titled "On the theory of scales of measurement". In that article, Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval", and "ratio", unifying both "qualitative" (which are described by his "nominal" type) and "quantitative" (to a different degree, all the rest of his scales). The concept of scale types later received the mathematical rigour that it lacked at its inception with the work of mathematical psychologists Theodore Alper (1985, 1987), Louis Narens (1981a, b), and R. Duncan Luce (1986, 1987, 2001). As Luce (1997, p. 395) wrote:
Comparison
Nominal level
A nominal scale consists only of a number of distinct classes or categories, for example: [Cat, Dog, Rabbit]. Unlike the other scales, no kind of relationship between the classes can be relied upon. Thus measuring with the nominal scale is equivalent to classifying.
Nominal measurement may differentiate between items or subjects based only on their names or (meta-)categories and other qualitative classifications they belong to. Thus it has been argued that even dichotomous data relies on a constructivist epistemology. In this case, discovery of an exception to a classification can be viewed as progress.
Numbers may be used to represent the variables but the numbers do not have numerical value or relationship: for example, a globally unique identifier.
Examples of these classifications include gender, nationality, ethnicity, language, genre, style, biological species, and form. In a university one could also use residence hall or department affiliation as examples. Other concrete examples are
in grammar, the parts of speech: noun, verb, preposition, article, pronoun, etc.
in politics, power projection: hard power, soft power, etc.
in biology, the taxonomic ranks below domains: kingdom, phylum, class, etc.
in software engineering, type of fault: specification faults, design faults, and code faults
Nominal scales were often called qualitative scales, and measurements made on qualitative scales were called qualitative data. However, the rise of qualitative research has made this usage confusing. If numbers are assigned as labels in nominal measurement, they have no specific numerical value or meaning. No form of arithmetic computation (+, −, ×, etc.) may be performed on nominal measures. The nominal level is the lowest measurement level used from a statistical point of view.
Mathematical operations
Equality and other operations that can be defined in terms of equality, such as inequality and set membership, are the only non-trivial operations that generically apply to objects of the nominal type.
Central tendency
The mode, i.e. the most common item, is allowed as the measure of central tendency for the nominal type. On the other hand, the median, i.e. the middle-ranked item, makes no sense for the nominal type of data since ranking is meaningless for the nominal type.
Ordinal scale
The ordinal type allows for rank order (1st, 2nd, 3rd, etc.) by which data can be sorted but still does not allow for a relative degree of difference between them. Examples include, on one hand, dichotomous data with dichotomous (or dichotomized) values such as "sick" vs. "healthy" when measuring health, "guilty" vs. "not-guilty" when making judgments in courts, "wrong/false" vs. "right/true" when measuring truth value, and, on the other hand, non-dichotomous data consisting of a spectrum of values, such as "completely agree", "mostly agree", "mostly disagree", "completely disagree" when measuring opinion.
The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. Rank orders represent ordinal scales and are frequently used in research relating to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10th and Ganga's position is 40th, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
Central tendency
The median, i.e. middle-ranked, item is allowed as the measure of central tendency; however, the mean (or average) as the measure of central tendency is not allowed. The mode is allowed.
In 1946, Stevens observed that psychological measurement, such as measurement of opinions, usually operates on ordinal scales; thus means and standard deviations have no validity, but they can be used to get ideas for how to improve operationalization of variables used in questionnaires. Most psychological data collected by psychometric instruments and tests, measuring cognitive and other abilities, are ordinal, although some theoreticians have argued they can be treated as interval or ratio scales. However, there is little prima facie evidence to suggest that such attributes are anything more than ordinal (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008). In particular, IQ scores reflect an ordinal scale, in which all scores are meaningful for comparison only. There is no absolute zero, and a 10-point difference may carry different meanings at different points of the scale.
Interval scale
The interval type allows for defining the degree of difference between measurements, but not the ratio between measurements. Examples include temperature scales with the Celsius scale, which has two defined points (the freezing and boiling point of water at specific conditions) and then separated into 100 intervals, date when measured from an arbitrary epoch (such as AD), location in Cartesian coordinates, and direction measured in degrees from true or magnetic north. Ratios are not meaningful since 20 °C cannot be said to be "twice as hot" as 10 °C (unlike temperature in kelvins), nor can multiplication/division be carried out between any two dates directly. However, ratios of differences can be expressed; for example, one difference can be twice another; for example, the ten-degree difference between 15 °C and 25 °C is twice the five-degree difference between 17 °C and 22 °C. Interval type variables are sometimes also called "scaled variables", but the formal mathematical term is an affine space (in this case an affine line).
Central tendency and statistical dispersion
The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
Ratio scale
See also:
The ratio type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit of measurement of the same kind (Michell, 1997, 1999). Most measurement in the physical sciences and engineering is done on ratio scales. Examples include mass, length, duration, plane angle, energy and electric charge. In contrast to interval scales, ratios can be compared using division. Very informally, many ratio scales can be described as specifying "how much" of something (i.e. an amount or magnitude). Ratio scale is often used to express an order of magnitude such as for temperature in Orders of magnitude (temperature).
Central tendency and statistical dispersion
The geometric mean and the harmonic mean are allowed to measure the central tendency, in addition to the mode, median, and arithmetic mean. The studentized range and the coefficient of variation are allowed to measure statistical dispersion. All statistical measures are allowed because all necessary mathematical operations are defined for the ratio scale.
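Since each level admits a progressively larger set of summary statistics, the hierarchy can be captured in a few lines of code. Below is a minimal C++ sketch; the ScaleType enum and the admits helper are hypothetical names introduced only for illustration (they do not come from any statistics library), with the mapping following Stevens's typology as summarized above.

#include <iostream>
#include <string>

enum class ScaleType { Nominal, Ordinal, Interval, Ratio };

// True if the given measure of central tendency is admissible at this level.
// Each level inherits the admissible measures of all lower levels.
bool admits(ScaleType scale, const std::string& measure) {
    if (measure == "mode")            return true;                        // all levels
    if (measure == "median")          return scale >= ScaleType::Ordinal;
    if (measure == "arithmetic mean") return scale >= ScaleType::Interval;
    if (measure == "geometric mean")  return scale >= ScaleType::Ratio;
    if (measure == "harmonic mean")   return scale >= ScaleType::Ratio;
    return false;                     // unknown measure
}

int main() {
    std::cout << admits(ScaleType::Ordinal, "arithmetic mean") << '\n';   // 0: not admissible
    std::cout << admits(ScaleType::Ratio,   "geometric mean")  << '\n';   // 1: admissible
}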
Debate on Stevens's typology
While Stevens's typology is widely adopted, it is still being challenged by other theoreticians, particularly in the cases of the nominal and ordinal types (Michell, 1986). Duncan (1986), for example, objected to the use of the word measurement in relation to the nominal type and Luce (1997) disagreed with Stevens's definition of measurement.
On the other hand, Stevens (1975) said of his own definition of measurement that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". Hand says, "Basic psychology texts often begin with Stevens's framework and the ideas are ubiquitous. Indeed, the essential soundness of his hierarchy has been established for representational measurement by mathematicians, determining the invariance properties of mappings from empirical systems to real number continua. Certainly the ideas have been revised, extended, and elaborated, but the remarkable thing is his insight given the relatively limited formal apparatus available to him and how many decades have passed since he coined them."
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such as SPSS requires the user to select the appropriate measurement class for each variable. This ensures that subsequent user errors cannot inadvertently perform meaningless analyses (for example correlation analysis with a variable on a nominal level).
L. L. Thurstone made progress toward developing a justification for obtaining the interval type, based on the law of comparative judgment. A common application of the law is the analytic hierarchy process. Further progress was made by Georg Rasch (1960), who developed the probabilistic Rasch model that provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
Other proposed typologies
Typologies aside from Stevens's typology have been proposed. For instance, Mosteller and Tukey (1977) and Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998), van den Berg (1991).
Mosteller and Tukey's typology (1977)
Mosteller and Tukey noted that the four levels are not exhaustive and proposed seven instead:
Names
Grades (ordered labels like beginner, intermediate, advanced)
Ranks (orders with 1 being the smallest or largest, 2 the next smallest or largest, and so on)
Counted fractions (bound by 0 and 1)
Counts (non-negative integers)
Amounts (non-negative real numbers)
Balances (any real number)
For example, percentages (a variation on fractions in the Mosteller–Tukey framework) do not fit well into Stevens's framework: No transformation is fully admissible.
Chrisman's typology (1998)
Nicholas R. Chrisman introduced an expanded list of levels of measurement to account for various measurements that do not necessarily fit with the traditional notions of levels of measurement. Measurements bound to a range and repeating (like degrees in a circle, clock time, etc.), graded membership categories, and other types of measurement do not fit to Stevens's original work, leading to the introduction of six new levels of measurement, for a total of ten:
Nominal
Gradation of membership
Ordinal
Interval
Log-interval
Extensive ratio
Cyclical ratio
Derived ratio
Counts
Absolute
While some claim that the extended levels of measurement are rarely used outside of academic geography, graded membership is central to fuzzy set theory, while absolute measurements include probabilities and the plausibility and ignorance in Dempster–Shafer theory. Cyclical ratio measurements include angles and times. Counts appear to be ratio measurements, but the scale is not arbitrary and fractional counts are commonly meaningless. Log-interval measurements are commonly displayed in stock market graphics. All these types of measurements are commonly used outside academic geography, and do not fit well to Stevens's original work.
Scale types and Stevens's "operational theory of measurement"
The theory of scale types is the intellectual handmaiden to Stevens's "operational theory of measurement", which was to become definitive within psychology and the behavioral sciences, despite Michell's characterization as its being quite at odds with measurement in the natural sciences (Michell, 1999). Essentially, the operational theory of measurement was a reaction to the conclusions of a committee established in 1932 by the British Association for the Advancement of Science to investigate the possibility of genuine scientific measurement in the psychological and behavioral sciences. This committee, which became known as the Ferguson committee, published a Final Report (Ferguson, et al., 1940, p. 245) in which Stevens's sone scale (Stevens & Davis, 1938) was an object of criticism:
That is, if Stevens's sone scale genuinely measured the intensity of auditory sensations, then evidence for such sensations as being quantitative attributes needed to be produced. The evidence needed was the presence of additive structure—a concept comprehensively treated by the German mathematician Otto Hölder (Hölder, 1901). Given that the physicist and measurement theorist Norman Robert Campbell dominated the Ferguson committee's deliberations, the committee concluded that measurement in the social sciences was impossible due to the lack of concatenation operations. This conclusion was later rendered false by the discovery of the theory of conjoint measurement by Debreu (1960) and independently by Luce & Tukey (1964). However, Stevens's reaction was not to conduct experiments to test for the presence of additive structure in sensations, but instead to render the conclusions of the Ferguson committee null and void by proposing a new theory of measurement:
Stevens was greatly influenced by the ideas of another Harvard academic, the Nobel laureate physicist Percy Bridgman (1927), whose doctrine of operationalism Stevens used to define measurement. In Stevens's definition, for example, it is the use of a tape measure that defines length (the object of measurement) as being measurable (and so by implication quantitative). Critics of operationalism object that it confuses the relations between two objects or events for properties of one of those objects or events (Moyer, 1981a, b; Rogers, 1989).
The Canadian measurement theorist William Rozeboom was an early and trenchant critic of Stevens's theory of scale types.
Same variable may be different scale type depending on context
Another issue is that the same variable may be a different scale type depending on how it is measured and on the goals of the analysis. For example, hair color is usually thought of as a nominal variable, since it has no apparent ordering. However, it is possible to order colors (including hair colors) in various ways, including by hue; this is known as colorimetry. Hue is an interval level variable.
See also
Cohen's kappa
Coherence (units of measurement)
Hume's principle
Inter-rater reliability
Logarithmic scale
Ramsey–Lewis method
Set theory
Statistical data type
Transition (linguistics)
References
Further reading
Briand, L. & El Emam, K. & Morasca, S. (1995). On the Application of Measurement Theory in Software Engineering. Empirical Software Engineering, 1, 61–88. [On line] https://web.archive.org/web/20070926232755/http://www2.umassd.edu/swpi/ISERN/isern-95-04.pdf
Cliff, N. (1996). Ordinal Methods for Behavioral Data Analysis. Mahwah, NJ: Lawrence Erlbaum.
Cliff, N. & Keats, J. A. (2003). Ordinal Measurement in the Behavioral Sciences. Mahwah, NJ: Erlbaum.
See also reprints in:
Readings in Statistics, Ch. 3, (Haber, A., Runyon, R. P., and Badia, P.) Reading, Mass: Addison–Wesley, 1970
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison–Wesley.
Luce, R. D. (2000). Utility of uncertain gains and losses: measurement theoretic and experimental approaches. Mahwah, N.J.: Lawrence Erlbaum.
Michell, J. (1999). Measurement in Psychology – A critical history of a methodological concept. Cambridge: Cambridge University Press.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research.
Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1–49). New York: Wiley.
Stevens, S. S. (1975). Psychophysics. New York: Wiley.
Scientific method
Statistical data types
Measurement
Cognitive science | Level of measurement | [
"Physics",
"Mathematics"
] | 4,028 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
821,611 | https://en.wikipedia.org/wiki/Complex%20harmonic%20motion | In physics, complex harmonic motion is a complicated realm based on the simple harmonic motion. The word "complex" refers to different situations. Unlike simple harmonic motion, which is regardless of air resistance, friction, etc., complex harmonic motion often has additional forces to dissipate the initial energy and lessen the speed and amplitude of an oscillation until the energy of the system is totally drained and the system comes to rest at its equilibrium point.
Types
Damped harmonic motion
Introduction
Damped harmonic motion is a realistic oscillation, such as that of an object hanging on a spring. Because of internal friction and air resistance, the system experiences a decrease in amplitude over time, as mechanical energy is converted into thermal energy.
Damped harmonic motion happens because the spring is not very efficient at storing and releasing energy, so the energy dies out. The damping force is proportional to the velocity of the object and acts in the direction opposite to the motion, so the object slows down quickly. Specifically, when an object is damped, the damping force is related to the velocity by a coefficient $c$: $F_\mathrm{d} = -c\,v$ (a short numerical sketch of the resulting damping regimes follows the list below).
The diagram shown on the right indicates three types of damped harmonic motion.
Critically damped: The system returns to equilibrium as quickly as possible without oscillating.
Underdamped: The system oscillates (at reduced frequency compared to the undamped case) with the amplitude gradually decreasing to zero.
Overdamped: The system returns (exponentially decays) to equilibrium without oscillating.
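The three regimes follow from the sign of the discriminant $c^2 - 4mk$ of the equation $m\ddot{x} + c\dot{x} + kx = 0$. Below is a minimal C++ sketch; the mass, spring constant, and damping coefficient are illustrative values chosen only for the example, and the time stepping is a simple semi-implicit scheme used purely for demonstration.

#include <cstdio>

// Minimal damped-oscillator sketch: m*x'' = -k*x - c*x'.
// The regime is decided by the discriminant c^2 - 4*m*k.
int main() {
    const double m = 1.0, k = 1.0, c = 0.2;        // illustrative values (underdamped case)
    const double disc = c * c - 4.0 * m * k;
    if (disc < 0.0)       std::puts("underdamped: oscillates with decaying amplitude");
    else if (disc == 0.0) std::puts("critically damped: fastest non-oscillatory return");
    else                  std::puts("overdamped: slow non-oscillatory return");

    // Simple explicit time stepping of the decaying oscillation.
    double x = 1.0, v = 0.0, dt = 0.01;
    for (int i = 0; i < 1000; ++i) {
        double a = (-k * x - c * v) / m;           // acceleration from spring + damping force
        v += a * dt;                               // semi-implicit Euler step
        x += v * dt;
    }
    std::printf("displacement after 10 s: %.4f (started at 1.0)\n", x);
}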
Difference between damped and forced oscillation
In damped oscillation, an object or system oscillates at its own natural frequency without a continuing external periodic force, and its amplitude decays over time. In forced oscillation, by contrast, an external periodic force is applied continuously and repeatedly. Hence these two motions have opposite results: damping drains energy from the oscillation, while forcing keeps supplying it.
Examples
A bungee jumper gets a large bouncing force from the compression of the springs underneath the platform. The compression turns kinetic energy into elastic potential energy; when the elastic potential energy reaches its maximum, it is returned as kinetic energy to the object or child pressing on it.
A rubber band works in the same way as a spring.
Resonance
Introduction
Resonance occurs when the frequency of the applied external force is the same as the natural frequency (resonant frequency) of the system. When this happens, the external force always acts in the same direction as the motion of the oscillating object, with the result that the amplitude of the oscillation increases indefinitely, as shown in the adjacent diagram. Away from the resonant frequency, either above or below it, the amplitude at the corresponding frequency is smaller.
In a set of driven pendulums with strings of different lengths, the pendulum whose string length matches that of the driving pendulum swings with the largest amplitude.
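For a sinusoidally driven, damped oscillator, the steady-state amplitude peaks near the natural frequency; the standard textbook expression $A(\omega) = F_0 / \sqrt{(k - m\omega^2)^2 + (c\omega)^2}$ (not given explicitly in the text) makes this concrete. Below is a minimal C++ sketch with illustrative parameter values, including a small damping term so the peak stays finite.

#include <cmath>
#include <cstdio>

// Steady-state amplitude of a driven, damped oscillator:
//   A(w) = F0 / sqrt((k - m*w^2)^2 + (c*w)^2)
// The peak sits near the natural frequency w0 = sqrt(k/m).
int main() {
    const double m = 1.0, k = 4.0, c = 0.1, F0 = 1.0;   // illustrative values
    const double w0 = std::sqrt(k / m);
    for (double w = 0.5 * w0; w <= 1.5 * w0 + 1e-9; w += 0.1 * w0) {
        double A = F0 / std::sqrt(std::pow(k - m * w * w, 2) + std::pow(c * w, 2));
        std::printf("drive w = %.2f   amplitude = %.2f\n", w, A);
    }
}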
Examples
Parts of a car may vibrate if you drive over a bumpy road at a speed where the vibrations transmitted to the body are at the resonant frequency of that part (though most cars are designed with parts with natural frequencies that are not likely to be produced by driving).
Bass frequencies from stereo speakers can make a room resonate, particularly annoying if you live next door and your living room resonates due to your neighbour's music.
A man walks across a field carrying a long plank on his shoulder. At each step the plank flexes a little (a) and the ends move up and down. He then starts to trot and as a result bounces up and down (b). At one particular speed resonance will occur between the motion of the man and the plank and the ends of the plank then oscillate with large amplitude.
When using a microwave oven to cook food, the microwaves travel through the food, causing the water molecules to vibrate at the same frequency, which is similar to resonance, so that the food as a whole heats up quickly.
Some helicopter crashes have also been attributed to resonance: the pilot's eyeballs resonate because of excessive pressure in the upper air, making the pilot unable to see overhead power lines, and as a result the helicopter goes out of control.
Resonance of two identical tuning forks.
Microwave ovens use resonance to vibrate polar molecules, which collide and manifest their energy transfer as heat.
See video: https://www.youtube.com/watch?v=aCocQa2Bcuc
Double pendulum
Introduction
A double pendulum is a simple pendulum hanging from another simple pendulum, a classic example of a compound pendulum system. It shows rich dynamic behavior: its motion appears chaotic, with no obvious repeating pattern to follow, which is what makes it complicated. Varying the lengths and masses of the two arms also makes it harder to locate the centers of mass of the two rods. Moreover, a double pendulum need not be confined to a single two-dimensional (usually vertical) plane; the compound pendulum can move anywhere within the sphere whose radius is the total length of the two pendulums. However, for small angles the double pendulum behaves much like a simple pendulum, because its motion is then governed by sine and cosine functions as well.
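In the small-angle regime mentioned above, the motion separates into two normal modes. As a minimal worked result, assuming equal masses $m$ and equal arm lengths $\ell$ (a special case chosen here for concreteness; the text does not fix the masses or lengths), the linearized normal-mode frequencies are

$\omega_{\pm}^{2} = \left(2 \pm \sqrt{2}\right)\frac{g}{\ell},$

where $g$ is the gravitational acceleration: the slower mode ($\omega_-$) has the two arms swinging in phase, and the faster mode ($\omega_+$) has them swinging in opposition.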
Examples
The image shows a marine clock with motor springs and a double pendulum wheel.
See also
Cymatics
Lissajous curve
Double pendulum
Resonance
References
Classical mechanics
Articles containing video clips
Motion (physics) | Complex harmonic motion | [
"Physics"
] | 1,122 | [
"Physical phenomena",
"Classical mechanics",
"Motion (physics)",
"Space",
"Mechanics",
"Spacetime"
] |
821,877 | https://en.wikipedia.org/wiki/Inductively%20coupled%20plasma | An inductively coupled plasma (ICP) or transformer coupled plasma (TCP) is a type of plasma source in which the energy is supplied by electric currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields.
Operation
There are three types of ICP geometries: planar (Fig. 3 (a)), cylindrical (Fig. 3 (b)), and half-toroidal (Fig. 3 (c)).
In planar geometry, the electrode is a length of flat metal wound like a spiral (or coil). In cylindrical geometry, it is like a helical spring. In half-toroidal geometry, it is a toroidal solenoid cut along its main diameter to two equal halves.
When a time-varying electric current is passed through the coil, it creates a time-varying magnetic field around it, with flux
,
where r is the distance to the center of coil (and of the quartz tube).
According to the Faraday–Lenz's law of induction, this creates azimuthal electromotive force in the rarefied gas:
,
which corresponds to electric field strengths of
,
leading to electron trajectories that generate the plasma. The dependence on r suggests that the gas ion motion is most intense in the outer region of the flame, where the temperature is greatest. In the real torch, the flame is cooled from the outside by the cooling gas, so the hottest outer part is at thermal equilibrium; the temperature there reaches 5,000–6,000 K. For a more rigorous description, see Hamilton–Jacobi equation in electromagnetic fields.
The frequency of alternating current used in the RLC circuit which contains the coil is usually 27–41 MHz. To induce plasma, a spark is produced at the electrodes at the gas outlet. Argon is one example of a commonly used rarefied gas. The high temperature of the plasma allows the atomization of molecules and thus determination of many elements, and in addition, for about 60 elements the degree of ionization in the torch exceeds 90%. The ICP torch consumes c. 1250–1550 W of power, and this depends on the element composition of the sample (due to different ionization energies).
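Combining Faraday's law above with a drive frequency from the range just quoted gives a feel for the induced azimuthal field. Below is a minimal C++ sketch, assuming an idealized, spatially uniform sinusoidal axial field with an illustrative peak value B0 = 0.01 T (not a value from the text); the linear growth of E with r matches the r-dependence noted above.

#include <cstdio>

// Peak azimuthal electric field induced inside the coil, assuming a spatially
// uniform sinusoidal axial field B(t) = B0*cos(w*t) (an idealization):
//   flux through a circle of radius r:   Phi = pi * r^2 * B(t)
//   Faraday's law, E * 2*pi*r = -dPhi/dt   =>   |E_peak(r)| = 0.5 * r * w * B0
int main() {
    const double pi = 3.141592653589793;
    const double B0 = 0.01;            // assumed peak axial field in tesla (illustrative only)
    const double f  = 27e6;            // 27 MHz, the low end of the RF range quoted above
    const double w  = 2.0 * pi * f;
    for (double r = 0.002; r <= 0.0101; r += 0.002) {    // radii from 2 mm to 10 mm
        double E_peak = 0.5 * r * w * B0;
        std::printf("r = %5.3f m   peak azimuthal E = %.3e V/m\n", r, E_peak);
    }
}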
The ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density. Transition from E to H heating mode occurs with external inputs.
Applications
Plasma electron temperatures can range between ~6,000 K and ~10,000 K and are usually several orders of magnitude greater than the temperature of the neutral species. Temperatures of argon ICP plasma discharge are typically ~5,500 to 6,500 K and are therefore comparable to those reached at the surface (photosphere) of the sun (~4,500 K to ~6,000 K). ICP discharges are of relatively high electron density, on the order of 1015 cm−3. As a result, ICP discharges have wide applications wherever a high-density plasma (HDP) is needed.
ICP-AES/ICP-OES, a type of atomic emission spectroscopy.
ICP-MS, a type of mass spectrometry.
ICP-RIE, a type of reactive-ion etching.
Another benefit of ICP discharges is that they are relatively free of contamination, because the electrodes are completely outside the reaction chamber. By contrast, in a capacitively coupled plasma (CCP), the electrodes are often placed inside the reactor chamber and are thus exposed to the plasma and to subsequent reactive chemical species.
See also
Capacitively coupled plasma
Induction plasma technology
Pulsed inductive thruster
References
Electrodynamics
Spectroscopy
Ion source
Plasma technology and applications | Inductively coupled plasma | [
"Physics",
"Chemistry",
"Mathematics"
] | 785 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Plasma physics",
"Plasma technology and applications",
"Instrumental analysis",
"Ion source",
"Mass spectrometry",
"Electrodynamics",
"Spectroscopy",
"Dynamical systems"
] |
822,294 | https://en.wikipedia.org/wiki/Drain-waste-vent%20system | A drain-waste-vent system (or DWV) is the combination of pipes and plumbing fittings that captures sewage and greywater within a structure and routes it toward a water treatment system. It includes venting to the exterior environment to prevent a vacuum from forming and impeding fixtures such as sinks, showers, and toilets from draining freely, and employs water-filled traps to block dangerous sewer gasses from entering a plumbed structure.
Overview
DWV systems capture both sewage and greywater within a structure and safely route it out via the low point of its "soil stack" to a waste treatment system, either via a municipal sanitary sewer system, or to a septic tank and leach field. (Cesspits are generally prohibited in developed areas.) For such drainage systems to work properly it is crucial that neutral air pressure be maintained within all pipes, allowing free gravity flow of water and sewage through drains. It is critical that a sufficient fall gradient (downward slope) be maintained throughout the drain pipes to keep liquids and entrained solids flowing freely from a building towards the main drain. In situations where a downward slope out of a building en route to a treatment system cannot be created, a special collection sump pit and grinding lift "sewage ejector" pump are needed. By contrast, potable water supply systems are pressurized up to or more and so do not require a continuous downward slope in their piping to distribute water through buildings.
Every fixture is required to have an internal or external trap to prevent sewer gases from entering a structure. Double trapping is prohibited by plumbing codes due to its susceptibility to clogging. In the U.S., every plumbing fixture must also be coupled to the system's vent piping. Without a vent, negative pressure can slow the flow of water leaving the system, resulting in clogs, or cause siphonage to empty a trap. The high point of the vent system (the top of its "soil stack") must be open to the exterior at atmospheric pressure. On large systems, separate parallel vent stacks may also be run to ensure sufficient airflow, because the number of devices linked to an atmospheric vent, and their distances from it, are regulated by plumbing code.
Operation
A sewer pipe is normally at neutral air pressure compared to the surrounding atmosphere. When a column of waste water flows through a pipe, it compresses air ahead of it in the system, creating a positive pressure that must be released so it does not push back on the waste stream and downstream traps, slow drainage, and induce potential clogs. As the column of water passes, air must also freely flow in behind the waste stream, or negative pressure results, which can siphon water out of a trap after it is passed and allow noxious sewer gases to enter a building. The extent of these pressure fluctuations is determined by the fluid volume of the waste discharge.
Generally, a toilet outlet has the shortest trap seal, making it most vulnerable to being emptied by induced siphonage.
An additional risk of pressurizing a system ahead of a waste stream is the potential for it to overwhelm a downstream trap and force tainted water into its fixture. Serious hygiene and health consequences can result. Tall buildings of three or more stories are particularly susceptible to this problem. Adequate supplementary vent stacks are installed in parallel to waste stacks to allow proper venting in large and tall buildings and eliminate these pressure-related venting problems.
External venting
DWV systems are vented directly through the building roof. Increasingly DWV pipe is ABS or PVC DWV-rated plastic pipe equipped with a flashing at the roof penetration to prevent rainwater from entering the buildings. Older structures may use asbestos, copper, iron, lead or clay pipes, in rough order of era of use.
Under many older building codes, a vent stack (a pipe leading to the main roof vent) is required to be within approx. a radius of the draining fixture it serves (sink, toilet, shower stall, etc.). To allow a single roof penetration as permitted by local building code, sub-vents may be tied together inside the building and exit via a common vent stack, frequently the "main" vent. Adding a vent connection within a long horizontal run with little slope will aid flow, and when used with a cleanout allows for better serviceability.
Unlike traps for other fixtures, toilet traps are usually designed to self-siphon to ensure complete evacuation of their contents; toilet bowls are then automatically refilled by a special valve mechanism.
Internal venting
In exceptional cases it is either not possible or inconvenient to vent a fixture or fixtures externally. In such cases a resort to "internal venting" may be viable, where compliant with local plumbing codes. Such alternatives include mechanical vents (also called cheater vents) such as air admittance valves and check vents, and "plumb-arounds" such as an inline vent employed in kitchen islands and similar applications:
Air admittance valves (AAVs, or commonly referred to in the UK as Durgo valves and in the US as Studor vents and Sure-Vent®) are negative-pressure-activated, one-way mechanical valves, used in a plumbing or drainage venting system to eliminate the need for conventional pipe venting and roof penetrations. A discharge of wastewater causes the AAV to open, releasing the vacuum and allowing air to enter the plumbing vent pipe for proper pressure equalization.
Since AAVs will only operate under negative pressure situations, they are not suitable for all venting applications, such as venting a sump, where positive pressures are created when the sump fills. Also, where positive drainage pressures are found in larger buildings or multi-story buildings, an air admittance valve could be used in conjunction with a positive pressure reduction device such as the PAPA positive air pressure attenuator to provide a complete venting solution for more complicated drainage venting systems.
Using AAVs can significantly reduce the amount of venting materials needed in a plumbing system, increase plumbing labor efficiency, allow greater flexibility in the layout of plumbing fixtures, and reduce long-term roof maintenance problems associated with conventional vent stack roofing penetrations.
While some state and local building departments prohibit AAVs, the International Residential and International Plumbing Codes allow it to be used in place of a vent through the roof. AAVs are certified to reliably open and close a minimum of 500,000 times, (approximately 30 years of use) with no release of sewer gas; some manufacturers claim their units are tested for up to 1.5 million cycles, or at least 80 years of use. AAVs have been effectively used in Europe for more than two decades.
Check vents
In-line vent (also known as an island fixture vent, and, colloquially, a "Chicago Loop", "Boston loop" or "Bow Vent") is an alternate method permissible in some jurisdictions of venting the trap installed on an under counter island sink or other similar applications where a conventional vertical vent stack or air admittance valve is not feasible or allowed.
As with all drains, ventilation must be provided to allow the flowing waste water to displace the sewer gas in the drain, and then to allow air (or some other fluid) to fill the vacuum which would otherwise form as the water flows down the pipe.
An island fixture vent allows the flowing water to displace the sewer gas up to the sanitary tee: the water flows downward while sewer gas is displaced upward and toward the vent. The vent can also provide air to fill any vacuum created.
The key to a functional island fixture vent is that the top elbow must be at least as high as the "flood level" (the peak possible drain water level in the sink), allowing it to serve as a de facto vacuum breaker preventing the loop from becoming a siphon for an overfilled sink, as from a clogged drain (rather than vent) line.
Fittings
All DWV systems require fittings and pipes of various sizes, which are measured by internal diameter. In most cases these are Schedule 40 PVC wyes, tees, and elbows ranging from 90 degrees down to 22.5 degrees, in both inside-diameter (street) and outer-diameter (hub) fitment, along with repair and slip couplings, reducer couplings, and pipe, which typically comes in ten-foot lengths. Sizes for hub fittings such as wyes and tees are based on the inside diameter of the pipe that goes into their hubs. Items such as washer boxes and Studor vents are also measured by the internal diameter of their fittings.
Cost of materials, ease of installation, and resistance to corrosion all have come to favor Schedule 40 PVC DWV systems, which are replacing cast iron "hub" and "no-hub" DWV systems in many municipalities, while parts and skills associated with installing and maintaining cast iron systems are becoming increasingly scarce and costly.
The advent of PVC and solvent welding adhesives, which secure fittings against leakage and separation by melting the material into itself, has profoundly simplified and made installing a DWV system less expensive. As with pressurized water "supply" plumbing, all lines must be bored for where they will not compromise structural framing and properly supported inline, and all external penetrations properly sealed and flashed.
See also
Fuel gas piping
Plumber
Potable cold and hot water supply
Rainwater, surface, and subsurface water drainage
References
Further reading
Building engineering | Drain-waste-vent system | [
"Engineering"
] | 1,952 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
824,435 | https://en.wikipedia.org/wiki/Molar%20refractivity | Molar refractivity, , is a measure of the total polarizability of a mole of a substance.
For a perfect dielectric which is made of one type of molecule, the molar refractivity is proportional to the polarizability of a single molecule of the substance. For real materials, intermolecular interactions (the effect of the induced dipole moment of one molecule on the field felt by nearby molecules) give rise to a density dependence.
The molar refractivity is commonly expressed as a sum of components, where the leading order is the value for a perfect dielectric, followed by the density-dependent corrections:
The coefficients are called the refractivity virial coefficients. Some research papers are dedicated to finding the values of the subleading coefficients of different substances. In other contexts, the material can be assumed to be approximately perfect, so that the only coefficient of interest is .
The coefficients depend on the wavelength of the applied field (and on the type and composition of the material), but not on thermodynamic state variables such as temperature or pressure.
The leading order (perfect dielectric) molar refractivity is defined (in Gaussian units) as
$A = \frac{4\pi}{3} N_\mathrm{A}\,\alpha,$
where $N_\mathrm{A}$ is the Avogadro constant and $\alpha$ is the mean polarizability of a molecule.
Substituting the molar refractivity into the Lorentz–Lorenz formula gives, for gases,
$A = \frac{RT}{p}\,\frac{n^2 - 1}{n^2 + 2},$
where $n$ is the refractive index, $p$ is the pressure of the gas, $R$ is the universal gas constant, and $T$ is the (absolute) temperature; the ideal gas law was used here to convert the particle density (appearing in the Lorentz–Lorenz formula) to pressure and temperature.
For a gas, $n^2 \approx 1$, so the molar refractivity can be approximated by
$A \approx \frac{RT}{3p}\,(n^2 - 1).$
The molar refractivity does not depend on $T$ or $p$, since they are not independent quantities (the refractive index $n$ itself varies with temperature and pressure).
In terms of density ρ and molecular weight M, it can be shown that:
$A = \frac{n^2 - 1}{n^2 + 2}\,\frac{M}{\rho}.$
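A minimal C++ sketch evaluating the density form above; the values used for liquid water (n ≈ 1.333, M ≈ 18.015 g/mol, ρ ≈ 0.997 g/cm³) are illustrative assumptions, not data from the text.

#include <cstdio>

// Molar refractivity from the Lorentz-Lorenz relation:
//   A = (n^2 - 1) / (n^2 + 2) * M / rho
// Units: with M in g/mol and rho in g/cm^3, A comes out in cm^3/mol.
double molar_refractivity(double n, double M, double rho) {
    return (n * n - 1.0) / (n * n + 2.0) * M / rho;
}

int main() {
    // Illustrative values for liquid water at room temperature (assumed, not from the text).
    double A = molar_refractivity(1.333, 18.015, 0.997);
    std::printf("molar refractivity of water ~ %.2f cm^3/mol\n", A);   // roughly 3.7 cm^3/mol
}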
Notes
References
Bibliography
Born, Max, and Wolf, Emil, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (7th ed.), section 2.3.3, Cambridge University Press (1999)
Physical chemistry
Optical quantities
Molar quantities | Molar refractivity | [
"Physics",
"Chemistry",
"Mathematics"
] | 453 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Intensive quantities",
"nan",
"Optical quantities",
"Physical chemistry",
"Molar quantities"
] |
825,735 | https://en.wikipedia.org/wiki/Verlet%20integration | Verlet integration () is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 by Jean Baptiste Delambre and has been rediscovered many times since then, most recently by Loup Verlet in the 1960s for use in molecular dynamics. It was also used by P. H. Cowell and A. C. C. Crommelin in 1909 to compute the orbit of Halley's Comet, and by Carl Størmer in 1907 to study the trajectories of electrical particles in a magnetic field (hence it is also called Størmer's method).
The Verlet integrator provides good numerical stability, as well as other properties that are important in physical systems such as time reversibility and preservation of the symplectic form on phase space, at no significant additional computational cost over the simple Euler method.
Basic Størmer–Verlet
For a second-order differential equation of the type $\ddot{x}(t) = A(x(t))$ with initial conditions $x(t_0) = x_0$ and $\dot{x}(t_0) = v_0$, an approximate numerical solution $x_n \approx x(t_n)$ at the times $t_n = t_0 + n\,\Delta t$ with step size $\Delta t > 0$ can be obtained by the following method:
set $x_1 = x_0 + v_0\,\Delta t + \tfrac{1}{2} A(x_0)\,\Delta t^2$,
for n = 1, 2, ... iterate $x_{n+1} = 2 x_n - x_{n-1} + A(x_n)\,\Delta t^2$.
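The two-step recipe above maps almost directly onto code. Below is a minimal C++ sketch for a one-dimensional test problem; the harmonic force law A(x) = -w^2*x and all parameter values are assumptions chosen only to exercise the method, not part of the algorithm itself.

#include <cmath>
#include <cstdio>

// Basic Stormer-Verlet for x'' = A(x), demonstrated on A(x) = -w^2 * x
// (a 1-D harmonic oscillator chosen only as a test problem).
int main() {
    const double w = 1.0, dt = 0.01;
    auto A = [w](double x) { return -w * w * x; };

    double x_prev = 1.0;                                              // x_0, from x(0) = 1
    double x_curr = x_prev + 0.0 * dt + 0.5 * A(x_prev) * dt * dt;    // x_1, from v(0) = 0

    for (int n = 1; n < 1000; ++n) {
        double x_next = 2.0 * x_curr - x_prev + A(x_curr) * dt * dt;  // Verlet update
        x_prev = x_curr;
        x_curr = x_next;
    }
    // After t = 10 the exact solution is cos(10); Verlet stays close to it.
    std::printf("verlet: %.6f   exact: %.6f\n", x_curr, std::cos(10.0));
}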
Equations of motion
Newton's equation of motion for conservative physical systems is
$M \ddot{x}(t) = F(x(t)) = -\nabla V(x(t)),$
or individually
$m_i \ddot{x}_i(t) = F_i(x(t)) = -\nabla_{x_i} V(x(t)),$
where
$t$ is the time,
$x(t) = (x_1(t), \ldots, x_N(t))$ is the ensemble of the position vectors of the objects,
$V(x)$ is the scalar potential function,
$F(x) = -\nabla V(x)$ is the negative gradient of the potential, giving the ensemble of forces on the particles,
$M$ is the mass matrix, typically diagonal with blocks with mass $m_i$ for every particle.
This equation, for various choices of the potential function , can be used to describe the evolution of diverse physical systems, from the motion of interacting molecules to the orbit of the planets.
After a transformation to bring the mass to the right side and forgetting the structure of multiple particles, the equation may be simplified to
$\ddot{x}(t) = A(x(t))$
with some suitable vector-valued function $A(x) = -M^{-1}\nabla V(x)$ representing the position-dependent acceleration. Typically, an initial position $x(t_0) = x_0$ and an initial velocity $v(t_0) = \dot{x}(t_0) = v_0$ are also given.
Verlet integration (without velocities)
To discretize and numerically solve this initial value problem, a time step $\Delta t > 0$ is chosen, and the sampling-point sequence $t_n = t_0 + n\,\Delta t$ considered. The task is to construct a sequence of points $x_n$ that closely follow the points $x(t_n)$ on the trajectory of the exact solution.
Where Euler's method uses the forward difference approximation to the first derivative in differential equations of order one, Verlet integration can be seen as using the central difference approximation to the second derivative:
$\frac{\Delta^2 x_n}{\Delta t^2} = \frac{x_{n+1} - 2x_n + x_{n-1}}{\Delta t^2} = A(x_n).$
Verlet integration in the form used as the Størmer method uses this equation to obtain the next position vector from the previous two without using the velocity, as
$x_{n+1} = 2x_n - x_{n-1} + A(x_n)\,\Delta t^2.$
Discretisation error
The time symmetry inherent in the method reduces the level of local errors introduced into the integration by the discretization by removing all odd-degree terms, here the terms in $\Delta t$ of degree three. The local error is quantified by inserting the exact values $x(t_{n-1}), x(t_n), x(t_{n+1})$ into the iteration and computing the Taylor expansions at time $t = t_n$ of the position vector $x(t \pm \Delta t)$ in different time directions:
$x(t + \Delta t) = x(t) + v(t)\,\Delta t + \frac{a(t)\,\Delta t^2}{2} + \frac{b(t)\,\Delta t^3}{6} + O(\Delta t^4),$
$x(t - \Delta t) = x(t) - v(t)\,\Delta t + \frac{a(t)\,\Delta t^2}{2} - \frac{b(t)\,\Delta t^3}{6} + O(\Delta t^4),$
where $x$ is the position, $v = \dot{x}$ the velocity, $a = \ddot{x}$ the acceleration, and $b = \dddot{x}$ the jerk (third derivative of the position with respect to the time).
Adding these two expansions gives
$x(t + \Delta t) = 2x(t) - x(t - \Delta t) + a(t)\,\Delta t^2 + O(\Delta t^4).$
We can see that the first- and third-order terms from the Taylor expansion cancel out, thus making the Verlet integrator an order more accurate than integration by simple Taylor expansion alone.
Caution should be applied to the fact that the acceleration here is computed from the exact solution, $a(t) = A(x(t))$, whereas in the iteration it is computed at the central iteration point, $A(x_n)$. In computing the global error, that is the distance between the exact solution and the approximation sequence, those two terms do not cancel exactly, influencing the order of the global error.
A simple example
To gain insight into the relation of local and global errors, it is helpful to examine simple examples where the exact solution, as well as the approximate solution, can be expressed in explicit formulas. The standard example for this task is the exponential function.
Consider the linear differential equation $\ddot{x}(t) = w^2 x(t)$ with a constant $w$. Its exact basis solutions are $e^{wt}$ and $e^{-wt}$.
The Størmer method applied to this differential equation leads to a linear recurrence relation
or
It can be solved by finding the roots of its characteristic polynomial
. These are
The basis solutions of the linear recurrence are and . To compare them with the exact solutions, Taylor expansions are computed:
The quotient of this series with the one of the exponential starts with , so
From there it follows that for the first basis solution the error can be computed as
That is, although the local discretization error is of order 4, due to the second order of the differential equation the global error is of order 2, with a constant that grows exponentially in time.
Starting the iteration
Note that at the start of the Verlet iteration at step , time , computing , one already needs the position vector at time . At first sight, this could give problems, because the initial conditions are known only at the initial time . However, from these the acceleration is known, and a suitable approximation for the position at the first time step can be obtained using the Taylor polynomial of degree two:
$x_1 = x_0 + v_0\,\Delta t + \tfrac{1}{2} A(x_0)\,\Delta t^2.$
The error on the first time step then is of order $O(\Delta t^3)$. This is not considered a problem because on a simulation over a large number of time steps, the error on the first time step is only a negligibly small amount of the total error, which at time $t_n$ is of the order $O(\Delta t^2)$, both for the distance of the position vectors $x_n$ to $x(t_n)$ as for the distance of the divided differences $\tfrac{x_{n+1}-x_n}{\Delta t}$ to $\tfrac{x(t_{n+1})-x(t_n)}{\Delta t}$. Moreover, to obtain this second-order global error, the initial error needs to be of at least third order.
Non-constant time differences
A disadvantage of the Størmer–Verlet method is that if the time step () changes, the method does not approximate the solution to the differential equation. This can be corrected using the formula
A more exact derivation uses the Taylor series (to second order) at for times and to obtain after elimination of
so that the iteration formula becomes
Computing velocities – Størmer–Verlet method
The velocities are not explicitly given in the basic Størmer equation, but often they are necessary for the calculation of certain physical quantities like the kinetic energy. This can create technical challenges in molecular dynamics simulations, because kinetic energy and instantaneous temperatures at time cannot be calculated for a system until the positions are known at time . This deficiency can either be dealt with using the velocity Verlet algorithm or by estimating the velocity using the position terms and the mean value theorem:
$v_n = \frac{x_{n+1} - x_{n-1}}{2\,\Delta t} + O(\Delta t^2).$
Note that this velocity term is a step behind the position term, since it is the velocity at time $t_n$, not $t_{n+1}$, meaning that $v_n$ is a second-order approximation to $v(t_n)$. With the same argument, but halving the time step, $v_{n+1/2} = \frac{x_{n+1} - x_n}{\Delta t}$ is a second-order approximation to $v(t_{n+1/2})$, with $t_{n+1/2} = t_n + \tfrac{1}{2}\Delta t$.
One can shorten the interval to approximate the velocity at time $t_{n+1}$ at the cost of accuracy:
$v_{n+1} = \frac{x_{n+1} - x_n}{\Delta t}.$
Velocity Verlet
A related, and more commonly used, algorithm is the velocity Verlet algorithm, similar to the leapfrog method, except that the velocity and position are calculated at the same value of the time variable (leapfrog does not, as the name suggests). This uses a similar approach, but explicitly incorporates velocity, solving the problem of the first time step in the basic Verlet algorithm:
$x_{n+1} = x_n + v_n\,\Delta t + \tfrac{1}{2} a_n\,\Delta t^2,$
$v_{n+1} = v_n + \frac{a_n + a_{n+1}}{2}\,\Delta t.$
It can be shown that the error in the velocity Verlet is of the same order as in the basic Verlet. Note that the velocity algorithm is not necessarily more memory-consuming, because, in basic Verlet, we keep track of two vectors of position, while in velocity Verlet, we keep track of one vector of position and one vector of velocity. The standard implementation scheme of this algorithm is:
Calculate $v_{n+1/2} = v_n + \tfrac{1}{2} a_n\,\Delta t$.
Calculate $x_{n+1} = x_n + v_{n+1/2}\,\Delta t$.
Derive $a_{n+1}$ from the interaction potential using $x_{n+1}$.
Calculate $v_{n+1} = v_{n+1/2} + \tfrac{1}{2} a_{n+1}\,\Delta t$.
This algorithm also works with variable time steps, and is identical to the 'kick-drift-kick' form of leapfrog method integration.
Eliminating the half-step velocity, this algorithm may be shortened to
Calculate $x_{n+1} = x_n + v_n\,\Delta t + \tfrac{1}{2} a_n\,\Delta t^2$.
Derive $a_{n+1}$ from the interaction potential using $x_{n+1}$.
Calculate $v_{n+1} = v_n + \frac{a_n + a_{n+1}}{2}\,\Delta t$.
Note, however, that this algorithm assumes that the acceleration $a_{n+1}$ only depends on the position $x_{n+1}$ and does not depend on the velocity $v_{n+1}$.
One might note that the long-term results of velocity Verlet, and similarly of leapfrog are one order better than the semi-implicit Euler method. The algorithms are almost identical up to a shift by half a time step in the velocity. This can be proven by rotating the above loop to start at step 3 and then noticing that the acceleration term in step 1 could be eliminated by combining steps 2 and 4. The only difference is that the midpoint velocity in velocity Verlet is considered the final velocity in semi-implicit Euler method.
The global error of all Euler methods is of order one, whereas the global error of this method is, similar to the midpoint method, of order two. Additionally, if the acceleration indeed results from the forces in a conservative mechanical or Hamiltonian system, the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order one for semi-explicit Euler and order two for Verlet-leapfrog. The same goes for all other conserved quantities of the system like linear or angular momentum, that are always preserved or nearly preserved in a symplectic integrator.
The velocity Verlet method is a special case of the Newmark-beta method with $\beta = 0$ and $\gamma = \tfrac{1}{2}$.
Algorithmic representation
Since velocity Verlet is a generally useful algorithm in 3D applications, a solution written in C++ could look like below. This type of position integration will significantly increase accuracy in 3D simulations and games when compared with the regular Euler method.

struct Body
{
    Vec3d pos { 0.0, 0.0, 0.0 };
    Vec3d vel { 2.0, 0.0, 0.0 };   // 2 m/s along x-axis
    Vec3d acc { 0.0, 0.0, 0.0 };   // no acceleration at first
    double mass = 1.0;             // 1kg

    /**
     * Updates pos and vel using "Velocity Verlet" integration
     * @param dt DeltaTime / time step [eg: 0.01]
     */
    void update(double dt)
    {
        Vec3d new_pos = pos + vel * dt + acc * (dt * dt * 0.5);
        Vec3d new_acc = apply_forces();
        Vec3d new_vel = vel + (acc + new_acc) * (dt * 0.5);
        pos = new_pos;
        vel = new_vel;
        acc = new_acc;
    }

    /**
     * To apply velocity to your objects, calculate the required Force vector instead
     * and apply the accumulated forces here.
     */
    Vec3d apply_forces() const
    {
        Vec3d new_acc = Vec3d{ 0.0, 0.0, -9.81 };   // 9.81 m/s² down in the z-axis
        // Apply any other forces here...
        // NOTE: Avoid depending on `vel` because Velocity Verlet assumes acceleration depends on position.
        return new_acc;
    }
};
Error terms
The global truncation error of the Verlet method is $O(\Delta t^2)$, both for position and velocity.
This is in contrast with the fact that the local error in position is only $O(\Delta t^4)$ as described above. The difference is due to the accumulation of the local truncation error over all of the iterations.
The global error can be derived by noting the following:
and
Therefore
Similarly:
which can be generalized to (it can be shown by induction, but it is given here without proof):
If we consider the global error in position between and , where , it is clear that
and therefore, the global (cumulative) error over a constant interval of time is given by
Because the velocity is determined in a non-cumulative way from the positions in the Verlet integrator, the global error in velocity is also $O(\Delta t^2)$.
In molecular dynamics simulations, the global error is typically far more important than the local error, and the Verlet integrator is therefore known as a second-order integrator.
Constraints
Systems of multiple particles with constraints are simpler to solve with Verlet integration than with Euler methods. Constraints between points may be, for example, potentials constraining them to a specific distance or attractive forces. They may be modeled as springs connecting the particles. Using springs of infinite stiffness, the model may then be solved with a Verlet algorithm.
In one dimension, the relationship between the unconstrained positions and the actual positions of points at time , given a desired constraint distance of , can be found with the algorithm
Verlet integration is useful because it directly relates the force to the position, rather than solving the problem using velocities.
Problems, however, arise when multiple constraining forces act on each particle. One way to solve this is to loop through every point in a simulation, so that at every point the constraint relaxation of the last is already used to speed up the spread of the information. In a simulation this may be implemented by using small time steps for the simulation, using a fixed number of constraint-solving steps per time step, or solving constraints until they are met by a specific deviation.
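A minimal C++ sketch of the relaxation step for a single distance constraint between two particles, assuming equal masses so each particle takes half of the correction; the structure and function names are illustrative, not from a particular physics library. Repeating this step over every constraint each time step gives the iterative per-constraint sweep described above.

#include <cmath>

struct Particle { double x, y; };   // current position only; Verlet keeps the previous one elsewhere

// Move two particles toward or away from each other so that their distance
// becomes `rest`. With equal masses, each particle takes half of the correction.
void relax_distance(Particle& a, Particle& b, double rest) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double dist = std::sqrt(dx * dx + dy * dy);
    if (dist == 0.0) return;                       // coincident points: nothing sensible to do
    double corr = 0.5 * (dist - rest) / dist;      // fraction of the offset each particle moves
    a.x += dx * corr;  a.y += dy * corr;
    b.x -= dx * corr;  b.y -= dy * corr;
}

// Typical use per time step: integrate all particles with Verlet, then run a few
// passes of relax_distance over every constraint until the deviations are small.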
When approximating the constraints locally to first order, this is the same as the Gauss–Seidel method. For small matrices it is known that LU decomposition is faster. Large systems can be divided into clusters (for example, each ragdoll = cluster). Inside clusters the LU method is used, between clusters the Gauss–Seidel method is used. The matrix code can be reused: The dependency of the forces on the positions can be approximated locally to first order, and the Verlet integration can be made more implicit.
Sophisticated software, such as SuperLU exists to solve complex problems using sparse matrices. Specific techniques, such as using (clusters of) matrices, may be used to address the specific problem, such as that of force propagating through a sheet of cloth without forming a sound wave.
Another way to solve holonomic constraints is to use constraint algorithms.
Collision reactions
One way of reacting to collisions is to use a penalty-based system, which basically applies a set force to a point upon contact. The problem with this is that it is very difficult to choose the force imparted. Use too strong a force, and objects will become unstable, too weak, and the objects will penetrate each other. Another way is to use projection collision reactions, which takes the offending point and attempts to move it the shortest distance possible to move it out of the other object.
The Verlet integration would automatically handle the velocity imparted by the collision in the latter case; however, note that this is not guaranteed to do so in a way that is consistent with collision physics (that is, changes in momentum are not guaranteed to be realistic). Instead of implicitly changing the velocity term, one would need to explicitly control the final velocities of the objects colliding (by changing the recorded position from the previous time step).
The two simplest methods for deciding on a new velocity are perfectly elastic and inelastic collisions. A slightly more complicated strategy that offers more control would involve using the coefficient of restitution.
See also
Courant–Friedrichs–Lewy condition
Energy drift
Symplectic integrator
Leapfrog integration
Beeman's algorithm
Literature
External links
Verlet Integration Demo and Code as a Java Applet
Advanced Character Physics by Thomas Jakobsen
Theory of Molecular Dynamics Simulations – bottom of page
Verlet integration implemented in modern JavaScript – bottom of page
Numerical differential equations
Articles with example C++ code
Computational physics
Molecular dynamics | Verlet integration | [
"Physics",
"Chemistry"
] | 3,277 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
825,748 | https://en.wikipedia.org/wiki/Propylene | Propylene, also known as propene, is an unsaturated organic compound with the chemical formula . It has one double bond, and is the second simplest member of the alkene class of hydrocarbons. It is a colorless gas with a faint petroleum-like odor.
Propylene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It was discovered in 1850 by A. W. von Hoffmann's student Captain (later Major General) John Williams Reynolds as the only gaseous product of thermal decomposition of amyl alcohol to react with chlorine and bromine.
Production
Steam cracking
The dominant technology for producing propylene is steam cracking, using propane as the feedstock. Cracking propane yields a mixture of ethylene, propylene, methane, hydrogen gas, and other related compounds. The yield of propylene is about 15%. The other principal feedstock is naphtha, especially in the Middle East and Asia.
Propylene can be separated by fractional distillation from the hydrocarbon mixtures obtained from cracking and other refining processes; refinery-grade propene is about 50 to 70%. In the United States, shale gas is a major source of propane.
Olefin conversion technology
In the Phillips triolefin or olefin conversion technology, propylene is interconverted with ethylene and 2-butenes. Rhenium and molybdenum catalysts are used:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3   (Re or Mo catalyst)
The technology is founded on an olefin metathesis reaction discovered at Phillips Petroleum Company. Propylene yields of about 90 wt% are achieved.
Related is the Methanol-to-Olefins/Methanol-to-Propene process. It converts synthesis gas (syngas) to methanol, and then converts the methanol to ethylene and/or propene. The process produces water as a by-product. Synthesis gas is produced from the reformation of natural gas or by the steam-induced reformation of petroleum products such as naphtha, or by gasification of coal or natural gas.
Fluid catalytic cracking
High severity fluid catalytic cracking (FCC) uses traditional FCC technology under severe conditions (higher catalyst-to-oil ratios, higher steam injection rates, higher temperatures, etc.) in order to maximize the amount of propene and other light products. A high severity FCC unit is usually fed with gas oils (paraffins) and residues, and produces about 20–25% (by mass) of propene on feedstock together with greater volumes of motor gasoline and distillate byproducts. These high temperature processes are expensive and have a high carbon footprint. For these reasons, alternative routes to propylene continue to attract attention.
Other commercialized methods
On-purpose propylene production technologies were developed throughout the twentieth century. Of these, propane dehydrogenation technologies such as the CATOFIN and OLEFLEX processes have become common, although they still make up a minority of the market, with most of the olefin being sourced from the above mentioned cracking technologies. Platinum, chromia, and vanadium catalysts are common in propane dehydrogenation processes.
Market
Propene production has remained static at around 35 million tonnes (Europe and North America only) from 2000 to 2008, but it has been increasing in East Asia, most notably Singapore and China. Total world production of propene is currently about half that of ethylene.
Research
The use of engineered enzymes has been explored but has not been commercialized.
There is ongoing research into the use of oxygen carrier catalysts for the oxidative dehydrogenation of propane. This poses several advantages, as this reaction mechanism can occur at lower temperatures than conventional dehydrogenation, and may not be equilibrium-limited because oxygen is used to combust the hydrogen by-product.
Uses
Propylene is the second most important starting product in the petrochemical industry after ethylene. It is the raw material for a wide variety of products. Polypropylene manufacturers consume nearly two thirds of global production. Polypropylene end uses include films, fibers, containers, packaging, and caps and closures. Propene is also used for the production of chemicals such as propylene oxide, acrylonitrile, cumene, butyraldehyde, and acrylic acid. In the year 2013 about 85 million tonnes of propylene were processed worldwide.
Propylene and benzene are converted to acetone and phenol via the cumene process.
Propylene is also used to produce isopropyl alcohol (propan-2-ol), acrylonitrile, propylene oxide, and epichlorohydrin.
The industrial production of acrylic acid involves the catalytic partial oxidation of propylene. Propylene is an intermediate in the oxidation to acrylic acid.
In industry and workshops, propylene is used as an alternative fuel to acetylene in Oxy-fuel welding and cutting, brazing and heating of metal for the purpose of bending. It has become a standard in BernzOmatic products and others in MAPP substitutes, now that true MAPP gas is no longer available.
Reactions
Propylene resembles other alkenes in that it undergoes electrophilic addition reactions relatively easily at room temperature. The relative weakness of its double bond explains its tendency to react with substances that can achieve this transformation. Alkene reactions include:
Polymerization and oligomerization
Oxidation
Halogenation
Hydrohalogenation
Alkylation
Hydration
Hydroformylation
Complexes of transition metals
Foundational to hydroformylation, alkene metathesis, and polymerization are metal-propylene complexes, which are intermediates in these processes. Propylene is prochiral, meaning that binding of a reagent (such as a metal electrophile) to the C=C group yields one of two enantiomers.
Polymerization
The majority of propylene is used to form polypropylene, a very important commodity thermoplastic, through chain-growth polymerization. In the presence of a suitable catalyst (typically a Ziegler–Natta catalyst), propylene will polymerize. There are multiple ways to achieve this, such as using high pressures to suspend the catalyst in a solution of liquid propylene, or running gaseous propylene through a fluidized bed reactor.
Oligomerization
In the presence of catalysts, propylene will form various short oligomers. It can dimerize to give 2,3-dimethyl-1-butene and/or 2,3-dimethyl-2-butene, or trimerize to form tripropylene.
Environmental safety
Propene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It is an impurity in some heating gases. Observed concentrations have been in the range of 0.1–4.8 parts per billion (ppb) in rural air, 4–10.5 ppb in urban air, and 7–260 ppb in industrial air samples.
In the United States and some European countries a threshold limit value of 500 parts per million (ppm) was established for occupational (8-hour time-weighted average) exposure. It is considered a volatile organic compound (VOC) and emissions are regulated by many governments, but it is not listed by the U.S. Environmental Protection Agency (EPA) as a hazardous air pollutant under the Clean Air Act. With a relatively short half-life, it is not expected to bioaccumulate.
Propene has low acute toxicity from inhalation and is not considered to be carcinogenic. Chronic toxicity studies in mice did not yield significant evidence suggesting adverse effects. Humans briefly exposed to 4,000 ppm did not experience any noticeable effects. Propene is dangerous from its potential to displace oxygen as an asphyxiant gas, and from its high flammability/explosion risk.
Bio-propylene is bio-based propylene. Its production has been examined, motivated by diverse interests such as reducing the carbon footprint. Production from glucose has been considered. More advanced ways of addressing such issues focus on electrified alternatives to steam cracking.
Storage and handling
Propene is flammable. Propene is usually stored as liquid under pressure, although it is also possible to store it safely as gas at ambient temperature in approved containers.
Occurrence in nature
Propene is detected in the interstellar medium through microwave spectroscopy. On September 30, 2013, NASA announced the detection of small amounts of naturally occurring propene in the atmosphere of Titan using infrared spectroscopy. The detection was made by a team led by NASA GSFC scientist Conor Nixon using data from the CIRS instrument on the Cassini orbiter spacecraft, part of the Cassini-Huygens mission. Its confirmation solved a 32-year-old mystery by filling a predicted gap in Titan's detected hydrocarbons, adding the C3H6 species (propene) to the already-detected C3H4 (propyne) and C3H8 (propane).
See also
Los Alfaques disaster
Inhalant abuse
2014 Kaohsiung gas explosions
2020 Houston explosion
Titan (moon)
References
Alkenes
Monomers
Commodity chemicals
Petrochemicals
Gases
Allyl compounds | Propylene | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,984 | [
"Matter",
"Products of chemical industry",
"Petrochemicals",
"Phases of matter",
"Organic compounds",
"Alkenes",
"Polymer chemistry",
"Statistical mechanics",
"Monomers",
"Commodity chemicals",
"Gases"
] |
826,258 | https://en.wikipedia.org/wiki/Scintillation%20%28physics%29 | In condensed matter physics, scintillation ( ) is the physical process where a material, called a scintillator, emits ultraviolet or visible light under excitation from high energy photons (X-rays or gamma rays) or energetic particles (such as electrons, alpha particles, neutrons, or ions). See scintillator and scintillation counter for practical applications.
Overview
Scintillation is an example of luminescence, whereby light of a characteristic spectrum is emitted following the absorption of radiation. The scintillation process can be summarized in three main stages: conversion, transport and energy transfer to the luminescence center, and luminescence. The emitted radiation is usually less energetic than the absorbed radiation, hence scintillation is generally a down-conversion process.
Conversion processes
The first stage of scintillation, conversion, is the process where the energy from the incident radiation is absorbed by the scintillator and highly energetic electrons and holes are created in the material. The energy absorption mechanism by the scintillator depends on the type and energy of radiation involved. For highly energetic photons such as X-rays (0.1 keV < E < 100 keV) and γ-rays (E > 100 keV), three types of interactions are responsible for the energy conversion process in scintillation: photoelectric absorption, Compton scattering, and pair production, which only occurs when E > 1022 keV, i.e. the photon has enough energy to create an electron-positron pair.
These processes have different attenuation coefficients, which depend mainly on the energy of the incident radiation, the average atomic number of the material and the density of the material. Generally the absorption of high energy radiation is described by:

$I(x) = I_0 e^{-\mu x}$

where $I_0$ is the intensity of the incident radiation, $x$ is the thickness of the material, and $\mu$ is the linear attenuation coefficient, which is the sum of the attenuation coefficients of the various contributions:

$\mu = \mu_{pe} + \mu_{CS} + \mu_{pp} + \mu_{other}$
At lower X-ray energies (≲ 60 keV), the most dominant process is the photoelectric effect, where the photons are fully absorbed by bound electrons in the material, usually core electrons in the K- or L-shell of the atom, which are then ejected, leading to the ionization of the host atom. The linear attenuation coefficient contribution for the photoelectric effect, $\mu_{pe}$, grows as $\rho Z^n$ and falls steeply with photon energy, where $\rho$ is the density of the scintillator, $Z$ is the average atomic number, $n$ is a constant that varies between 3 and 4, and $E$ is the energy of the photon. At low X-ray energies, scintillator materials with atoms with high atomic numbers and densities are favored for more efficient absorption of the incident radiation.
At higher energies (≳ 60 keV) Compton scattering, the inelastic scattering of photons by bound electrons, often also leading to ionization of the host atom, becomes the more dominant conversion process, with its own linear attenuation coefficient contribution, $\mu_{CS}$.
Unlike the photoelectric effect, the absorption resulting from Compton scattering is independent of the atomic number of the atoms present in the crystal, but depends linearly on their density.
At γ-ray energies higher than 1022 keV, i.e. energies higher than twice the rest-mass energy of the electron, pair production starts to occur. Pair production is the relativistic phenomenon where the energy of a photon is converted into an electron-positron pair. The created electron and positron will then further interact with the scintillating material to generate energetic electrons and holes. The attenuation coefficient contribution for pair production, $\mu_{pp}$, involves the excess of the photon energy over the threshold $2 m_e c^2$, where $m_e$ is the rest mass of the electron and $c$ is the speed of light. Hence, at high γ-ray energies, the energy absorption depends both on the density and average atomic number of the scintillator. In addition, unlike for the photoelectric effect and Compton scattering, pair production becomes more probable as the energy of the incident photons increases, and pair production becomes the most dominant conversion process above ~ 8 MeV.
The term $\mu_{other}$ includes other (minor) contributions, such as Rayleigh (coherent) scattering at low energies and photonuclear reactions at very high energies, which also contribute to the conversion; however, the contribution from Rayleigh scattering is almost negligible and photonuclear reactions become relevant only at very high energies.
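A minimal numerical sketch of how the total attenuation determines the absorbed fraction, using the exponential law $I = I_0 e^{-\mu x}$ with an illustrative (assumed) attenuation coefficient and thickness rather than values for any particular scintillator:

```python
import math

def transmitted_fraction(mu, thickness):
    """Fraction of incident photons transmitted through a scintillator,
    I/I0 = exp(-mu * x), with mu the total linear attenuation coefficient."""
    return math.exp(-mu * thickness)

mu = 0.5   # total linear attenuation coefficient in 1/cm (assumed value)
x = 3.0    # scintillator thickness in cm (assumed value)
print(1.0 - transmitted_fraction(mu, x))  # absorbed fraction, ~0.78
```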
After the energy of the incident radiation is absorbed and converted into so-called hot electrons and holes in the material, these energetic charge carriers will interact with other particles and quasi-particles in the scintillator (electrons, plasmons, phonons), leading to an "avalanche event", where a great number of secondary electron–hole pairs are produced until the hot electrons and holes have lost sufficient energy. The large number of electrons and holes that result from this process will then undergo thermalization, i.e. dissipation of part of their energy through interaction with phonons in the material.
The resulting large number of energetic charge carriers will then undergo further energy dissipation called thermalization. This occurs via interaction with phonons for electrons and Auger processes for holes.
The average timescale for conversion, including energy absorption and thermalization has been estimated to be in the order of 1 ps, which is much faster than the average decay time in photoluminescence.
Charge transport of excited carriers
The second stage of scintillation is the charge transport of thermalized electrons and holes towards luminescence centers and the energy transfer to the atoms involved in the luminescence process. In this stage, the large number of electrons and holes that have been generated during the conversion process, migrate inside the material. This is probably one of the most critical phases of scintillation, since it is generally in this stage where most loss of efficiency occur due to effects such as trapping or non-radiative recombination. These are mainly caused by the presence of defects in the scintillator crystal, such as impurities, ionic vacancies, and grain boundaries. The charge transport can also become a bottleneck for the timing of the scintillation process. The charge transport phase is also one of the least understood parts of scintillation and depends strongly on the type material involved and its intrinsic charge conduction properties.
Luminescence
Once the electrons and holes reach the luminescence centers, the third and final stage of scintillation occurs: luminescence. In this stage the electrons and holes are captured by the luminescence center, and then the electron and hole recombine radiatively. The exact details of the luminescence phase also depend on the type of material used for scintillation.
Inorganic crystals
For photons such as gamma rays, thallium activated NaI crystals (NaI(Tl)) are often used. For a faster response (but only 5% of the output) CsF crystals can be used.
Organic scintillators
In organic molecules scintillation is a product of π-orbitals. Organic materials form molecular crystals where the molecules are loosely bound by Van der Waals forces. The ground state of 12C is 1s2 2s2 2p2. In valence bond theory, when carbon forms compounds, one of the 2s electrons is excited into the 2p state resulting in a configuration of 1s2 2s1 2p3. To describe the different valencies of carbon, the four valence electron orbitals, one 2s and three 2p, are considered to be mixed or hybridized in several alternative configurations. For example, in a tetrahedral configuration the s and p3 orbitals combine to produce four hybrid orbitals. In another configuration, known as trigonal configuration, one of the p-orbitals (say pz) remains unchanged and three hybrid orbitals are produced by mixing the s, px and py orbitals. The orbitals that are symmetrical about the bonding axes and plane of the molecule (sp2) are known as σ-electrons and the bonds are called σ-bonds. The pz orbital is called a π-orbital. A π-bond occurs when two π-orbitals interact. This occurs when their nodal planes are coplanar.
In certain organic molecules π-orbitals interact to produce a common nodal plane. These form delocalized π-electrons that can be excited by radiation. The de-excitation of the delocalized π-electrons results in luminescence.
The excited states of π-electron systems can be explained by the perimeter free-electron model (Platt 1949). This model is used for describing polycyclic hydrocarbons consisting of condensed systems of benzenoid rings in which no C atom belongs to more than two rings and every C atom is on the periphery.
The ring can be approximated as a circle with circumference l. The wave-function of the electron orbital must satisfy the condition of a plane rotator, i.e. it must be periodic around the ring: $\psi(x) = \psi(x + l)$.
The corresponding solutions to the Schrödinger wave equation are:
where q is the orbital ring quantum number; the number of nodes of the wave-function. Since the electron can have spin up and spin down and can rotate about the circle in both directions, all of the energy levels except the lowest are doubly degenerate.
In the π-electronic energy level diagram of an organic molecule, absorption of radiation is followed by molecular vibration to the S1 state. This is followed by a de-excitation to the S0 state called fluorescence. The population of triplet states is also possible by other means. The triplet states decay with a much longer decay time than singlet states, which results in what is called the slow component of the decay process (the fluorescence process is called the fast component). Depending on the particular energy loss of a certain particle (dE/dx), the "fast" and "slow" states are occupied in different proportions. The relative intensities in the light output of these states thus differ for different dE/dx. This property of scintillators allows for pulse shape discrimination: it is possible to identify which particle was detected by looking at the pulse shape. Of course, the difference in shape is visible in the trailing side of the pulse, since it is due to the decay of the excited states.
See also
Positron emission tomography
References
Condensed matter physics
Scattering, absorption and radiative transfer (optics) | Scintillation (physics) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,152 | [
" absorption and radiative transfer (optics)",
"Phases of matter",
"Materials science",
"Scattering",
"Condensed matter physics",
"Matter"
] |
826,956 | https://en.wikipedia.org/wiki/Clausius%E2%80%93Mossotti%20relation | In electromagnetism, the Clausius–Mossotti relation, named for O. F. Mossotti and Rudolf Clausius, expresses the dielectric constant (relative permittivity, $\varepsilon_\mathrm{r}$) of a material in terms of the atomic polarizability, $\alpha$, of the material's constituent atoms and/or molecules, or a homogeneous mixture thereof. It is equivalent to the Lorentz–Lorenz equation, which relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. It may be expressed as:

$\frac{\varepsilon_\mathrm{r} - 1}{\varepsilon_\mathrm{r} + 2} = \frac{N\alpha}{3\varepsilon_0}$

where
$\varepsilon_\mathrm{r}$ is the dielectric constant of the material, which for non-magnetic materials is equal to $n^2$, where $n$ is the refractive index;
$\varepsilon_0$ is the permittivity of free space;
$N$ is the number density of the molecules (number per cubic meter);
$\alpha$ is the molecular polarizability in SI units [C·m2/V].
In the case that the material consists of a mixture of two or more species, the right hand side of the above equation would consist of the sum of the molecular polarizability contribution from each species, indexed by $i$, in the following form:

$\frac{\varepsilon_\mathrm{r} - 1}{\varepsilon_\mathrm{r} + 2} = \sum_i \frac{N_i \alpha_i}{3\varepsilon_0}$
In the CGS system of units the Clausius–Mossotti relation is typically rewritten to show the molecular polarizability volume $\alpha' = \alpha/(4\pi\varepsilon_0)$, which has units of volume [m3]. Confusion may arise from the practice of using the shorter name "molecular polarizability" for both $\alpha$ and $\alpha'$ within literature intended for the respective unit system.
The Clausius–Mossotti relation assumes only an induced dipole relevant to its polarizability and is thus inapplicable for substances with a significant permanent dipole. It is applicable to gases at sufficiently low densities and pressures. For example, the Clausius–Mossotti relation is accurate for N2 gas up to 1000 atm between 25 °C and 125 °C. Moreover, the Clausius–Mossotti relation may be applicable to substances if the applied electric field is at sufficiently high frequencies that any permanent dipole modes are inactive.
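A minimal numerical sketch of the relation in SI units; the number density and polarizability below are illustrative (assumed) values for a dilute gas, not data taken from the text:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti_eps(N, alpha):
    """Relative permittivity from number density N (1/m^3) and molecular
    polarizability alpha (C*m^2/V), solving
    (eps_r - 1)/(eps_r + 2) = N*alpha/(3*eps0) for eps_r."""
    x = N * alpha / (3.0 * EPS0)
    return (1.0 + 2.0 * x) / (1.0 - x)

N = 2.5e25        # molecules per m^3, roughly atmospheric density (assumed)
alpha = 1.9e-40   # C*m^2/V, an assumed molecular polarizability
eps_r = clausius_mossotti_eps(N, alpha)
print(eps_r, eps_r ** 0.5)  # eps_r and the corresponding refractive index
```

For a non-magnetic material the square root of the result is the refractive index, which is the content of the Lorentz–Lorenz form discussed below.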
Lorentz–Lorenz equation
The Lorentz–Lorenz equation is similar to the Clausius–Mossotti relation, except that it relates the refractive index (rather than the dielectric constant) of a substance to its polarizability. The Lorentz–Lorenz equation is named after the Danish mathematician and scientist Ludvig Lorenz, who published it in 1869, and the Dutch physicist Hendrik Lorentz, who discovered it independently in 1878.
The most general form of the Lorentz–Lorenz equation is (in Gaussian-CGS units)

$\frac{n^2 - 1}{n^2 + 2} = \frac{4\pi}{3} N \alpha$

where $n$ is the refractive index, $N$ is the number of molecules per unit volume, and $\alpha$ is the mean polarizability.
This equation is approximately valid for homogeneous solids as well as liquids and gases.
When the square of the refractive index is $n^2 \approx 1$, as it is for many gases, the equation reduces to:

$n^2 - 1 \approx 4\pi N \alpha$

or simply

$n \approx 1 + 2\pi N \alpha$
This applies to gases at ordinary pressures. The refractive index of the gas can then be expressed in terms of the molar refractivity as:
where is the pressure of the gas, is the universal gas constant, and is the (absolute) temperature, which together determine the number density .
References
Bibliography
Lorenz, Ludvig, "Experimentale og theoretiske Undersogelser over Legemernes Brydningsforhold", Vidensk Slsk. Sckrifter 8,205 (1870) https://www.biodiversitylibrary.org/item/48423#page/5/mode/1up
O. F. Mossotti, Discussione analitica sull'influenza che l'azione di un mezzo dielettrico ha sulla distribuzione dell'elettricità alla superficie di più corpi electrici disseminati in esso, Memorie di Mathematica e di Fisica della Società Italiana della Scienza Residente in Modena, vol. 24, p. 49-74 (1850).
Electrodynamics
Electromagnetism
Electric and magnetic fields in matter
Eponymous equations of physics | Clausius–Mossotti relation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 845 | [
"Physical phenomena",
"Electromagnetism",
"Equations of physics",
"Eponymous equations of physics",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Fundamental interactions",
"Electrodynamics",
"Dynamical systems"
] |
827,305 | https://en.wikipedia.org/wiki/Noncommutative%20quantum%20field%20theory | In mathematical physics, noncommutative quantum field theory (or quantum field theory on noncommutative spacetime) is an application of noncommutative mathematics to the spacetime of quantum field theory that is an outgrowth of noncommutative geometry and index theory in which the coordinate functions are noncommutative. One commonly studied version of such theories has the "canonical" commutation relation:

$[x^\mu, x^\nu] = i\theta^{\mu\nu}$

where $x^\mu$ and $x^\nu$ are the hermitian generators of a noncommutative C*-algebra of "functions on spacetime" and $\theta^{\mu\nu}$ is a constant antisymmetric tensor. That means that (with any given set of axes), it is impossible to accurately measure the position of a particle with respect to more than one axis. In fact, this leads to an uncertainty relation for the coordinates analogous to the Heisenberg uncertainty principle.
Various lower limits have been claimed for the noncommutative scale (i.e. how accurately positions can be measured), but there is currently no experimental evidence in favour of such a theory or grounds for ruling them out.
One of the novel features of noncommutative field theories is the UV/IR mixing phenomenon in which the physics at high energies affects the physics at low energies which does not occur in quantum field theories in which the coordinates commute.
Other features include violation of Lorentz invariance due to the preferred direction of noncommutativity. Relativistic invariance can however be retained in the sense of twisted Poincaré invariance of the theory. The causality condition is modified from that of the commutative theories.
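A small symbolic sketch of how such a commutation relation can be realized, assuming the coordinates act through a Moyal star product truncated at first order in θ; the two-dimensional restriction, the truncation, and the function names are illustrative choices, not taken from the text:

```python
import sympy as sp

x, y, theta = sp.symbols("x y theta", real=True)

def star(f, g):
    """Moyal star product on the (x, y) plane, kept to first order in theta:
    f * g = f g + (i theta / 2) (df/dx dg/dy - df/dy dg/dx) + O(theta^2)."""
    bracket = sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)
    return f * g + sp.I * theta / 2 * bracket

# The star commutator of the coordinate functions reproduces [x, y] = i*theta.
commutator = sp.simplify(star(x, y) - star(y, x))
print(commutator)  # -> I*theta
```

Replacing ordinary products of fields by such a star product is a standard way of implementing the commutation relation above in noncommutative field theory actions.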
History and motivation
Heisenberg was the first to suggest extending noncommutativity to the coordinates as a possible way of removing the infinite quantities appearing in field theories before the renormalization procedure was developed and had gained acceptance. The first paper on the subject was published in 1947 by Hartland Snyder. The success of the renormalization method resulted in little attention being paid to the subject for some time. In the 1980s, mathematicians, most notably Alain Connes, developed noncommutative geometry. Among other things, this work generalized the notion of differential structure to a noncommutative setting. This led to an operator algebraic description of noncommutative space-times, with the problem that it classically corresponds to a manifold with a positive-definite metric tensor, so that there is no description of (noncommutative) causality in this approach. However, it also led to the development of a Yang–Mills theory on a noncommutative torus.
The particle physics community became interested in the noncommutative approach because of a paper by Nathan Seiberg and Edward Witten. They argued in the context of string theory that the coordinate functions of the endpoints of open strings constrained to a D-brane in the presence of a constant Neveu–Schwarz B-field—equivalent to a constant magnetic field on the brane—would satisfy the noncommutative algebra set out above. The implication is that a quantum field theory on noncommutative spacetime can be interpreted as a low energy limit of the theory of open strings.
Two papers, one by Sergio Doplicher, Klaus Fredenhagen and John Roberts, and the other by D. V. Ahluwalia, set out another motivation for the possible noncommutativity of space-time.
The arguments go as follows: According to general relativity, when the energy density grows sufficiently large, a black hole is formed. On the other hand, according to the Heisenberg uncertainty principle, a measurement of a space-time separation causes an uncertainty in momentum inversely proportional to the extent of the separation. Thus energy whose scale corresponds to the uncertainty in momentum is localized in the system within a region corresponding to the uncertainty in position. When the separation is small enough, the Schwarzschild radius of the system is reached and a black hole is formed, which prevents any information from escaping the system. Thus there is a lower bound for the measurement of length. A sufficient condition for preventing gravitational collapse can be expressed as an uncertainty relation for the coordinates. This relation can in turn be derived from a commutation relation for the coordinates.
It is worth stressing that, differently from other approaches, in particular those relying upon Connes' ideas, here the noncommutative spacetime is a proper spacetime, i.e. it extends the idea of a four-dimensional pseudo-Riemannian manifold. On the other hand, differently from Connes' noncommutative geometry, the proposed model turns out to be coordinate-dependent from scratch.
In the Doplicher–Fredenhagen–Roberts paper, noncommutativity of coordinates concerns all four spacetime coordinates and not only the spatial ones.
See also
Moyal product
Noncommutative geometry
Noncommutative standard model
Wigner–Weyl transform
Footnotes
Further reading
M. R. Douglas and N. A. Nekrasov, (2001). Noncommutative field theory. Rev. Mod. Phys., 73(4), 977.
Richard J. Szabo (2003) "Quantum Field Theory on Noncommutative Spaces," Physics Reports 378: 207-99. An expository article on noncommutative quantum field theories.
Noncommutative quantum field theory, see statistics on arxiv.org
Valter Moretti (2003), "Aspects of noncommutative Lorentzian geometry for globally hyperbolic spacetimes," Rev. Math. Phys. 15: 1171-1218. An expository paper (also) on the difficulties to extend non-commutative geometry to the Lorentzian case describing causality
Noncommutative geometry
Quantum field theory
Mathematical quantization | Noncommutative quantum field theory | [
"Physics"
] | 1,174 | [
"Quantum field theory",
"Mathematical quantization",
"Quantum mechanics"
] |
827,528 | https://en.wikipedia.org/wiki/Energy%20carrier | An energy carrier is a substance (fuel) or sometimes a phenomenon (energy system) that contains energy that can be later converted to other forms such as mechanical work or heat or to operate chemical or physical processes.
Such carriers include springs, electrical batteries, capacitors, pressurized air, dammed water, hydrogen, petroleum, coal, wood, and natural gas. An energy carrier does not produce energy; it simply contains energy imbued by another system.
Definition according to ISO 13600
According to ISO 13600, an energy carrier is either a substance or a phenomenon that can be used to produce mechanical work or heat or to operate chemical or physical processes. It is any system or substance that contains energy for conversion as usable energy later or somewhere else. This could be converted for use in, for example, an appliance or vehicle. Such carriers include springs, electrical batteries, capacitors, pressurized air, dammed water, hydrogen, petroleum, coal, wood, and natural gas.
ISO 13600 series (ISO 13600, ISO 13601, and ISO 13602) are intended to be used as tools to define, describe, analyse and compare technical energy systems (TES) at micro and macro levels:
ISO 13600 (Technical energy systems — Basic concepts) covers basic definitions and terms needed to define and describe TESs in general and TESs of energyware supply and demand sectors in particular.
ISO 13601 (Technical energy systems — Structure for analysis — Energyware supply and demand sectors) covers structures that shall be used to describe and analyse sub-sectors at the macro level of energyware supply and demand
ISO 13602 (all parts) facilitates the description and analysis of any technical energy systems.
Definition within the field of energetics
In the field of energetics, an energy carrier is produced by human technology from a primary energy source. Only the energy sector uses primary energy sources. Other sectors of society use an energy carrier to perform useful activities (end-uses). The distinction between "Energy Carriers" (EC) and "Primary Energy Sources" (PES) is extremely important. An energy carrier can be more valuable (have a higher quality) than a primary energy source. For example 1 megajoule (MJ) of electricity produced by a hydroelectric plant is equivalent to 3 MJ of oil. Sunlight is a main source of primary energy, which can be transformed into plants and then into coal, oil and gas. Solar power and wind power are other derivatives of sunlight. Note that although coal, oil and natural gas are derived from sunlight, they are considered primary energy sources which are extracted from the earth (fossil fuels). Natural uranium is also a primary energy source extracted from the earth but does not come from the decomposition of organisms (mineral fuel).
See also
Capital goods
Coefficient of performance
Embedded energy
Energy and society
Energy crisis
Energy pay-back
Energy resource
Energy source
Energy storage
Energyware
Entropy
Exergy
Future energy development
Hydrogen economy
ISO 14000
Liquid nitrogen economy
Lithium economy
Methanol economy
Renewable resource
Vegetable oil economy
Renewable Energy
References
Further reading
European Nuclear Society info pool/glossary: Energy carrier
Our Energy Futures glossary: Energy Carriers
Störungsdienst, Elektriker (in German)
External links
"Boron: a better energy carrier than hydrogen?" paper by Graham Cowan
ISO 13600 Technical energy systems -- Basic concepts: gives the basic concepts needed to define and describe technical energy systems.
Energy storage
Thermodynamics
Hydrogen production | Energy carrier | [
"Physics",
"Chemistry",
"Mathematics"
] | 707 | [
"Thermodynamics",
"Dynamical systems"
] |
3,588,425 | https://en.wikipedia.org/wiki/Scalar%20%28physics%29 | Scalar quantities or simply scalars are physical quantities that can be described by a single pure number (a scalar, typically a real number), accompanied by a unit of measurement, as in "10cm" (ten centimeters).
Examples of scalar quantities are length, mass, charge, volume, and time.
Scalars may represent the magnitude of physical quantities, as speed is the magnitude of velocity. Scalars do not represent a direction.
Scalars are unaffected by changes to a vector space basis (i.e., a coordinate rotation) but may be affected by translations (as in relative speed).
A change of a vector space basis changes the description of a vector in terms of the basis used but does not change the vector itself, while a scalar has nothing to do with this change. In classical physics, like Newtonian mechanics, rotations and reflections preserve scalars, while in relativity, Lorentz transformations or space-time translations preserve scalars. The term "scalar" has its origin in the multiplication of vectors by a unitless scalar, which is a uniform scaling transformation.
Relationship with the mathematical concept
A scalar in physics and other areas of science is also a scalar in mathematics, as an element of a mathematical field used to define a vector space. For example, the magnitude (or length) of an electric field vector is calculated as the square root of its absolute square (the inner product of the electric field with itself); so, the inner product's result is an element of the mathematical field for the vector space in which the electric field is described. As the vector space in this example and usual cases in physics is defined over the mathematical field of real numbers or complex numbers, the magnitude is also an element of the field, so it is mathematically a scalar. Since the inner product is independent of any vector space basis, the electric field magnitude is also physically a scalar.
The mass of an object is unaffected by a change of vector space basis so it is also a physical scalar, described by a real number as an element of the real number field. Since a field is a vector space with addition defined based on vector addition and multiplication defined as scalar multiplication, the mass is also a mathematical scalar.
Scalar field
Since scalars mostly may be treated as special cases of multi-dimensional quantities such as vectors and tensors, physical scalar fields might be regarded as a special case of more general fields, like vector fields, spinor fields, and tensor fields.
Units
Like other physical quantities, a physical quantity of scalar is also typically expressed by a numerical value and a physical unit, not merely a number, to provide its physical meaning. It may be regarded as the product of the number and the unit (e.g., 1 km as a physical distance is the same as 1,000 m). A physical distance does not depend on the length of each base vector of the coordinate system where the base vector length corresponds to the physical distance unit in use. (E.g., 1 m base vector length means the meter unit is used.) A physical distance differs from a metric in the sense that it is not just a real number while the metric is calculated to a real number, but the metric can be converted to the physical distance by converting each base vector length to the corresponding physical unit.
Any change of a coordinate system may affect the formula for computing scalars (for example, the Euclidean formula for distance in terms of coordinates relies on the basis being orthonormal), but not the scalars themselves. Vectors themselves also do not change by a change of a coordinate system, but their descriptions change (e.g., the numbers representing a position vector change when the coordinate system in use is rotated), as illustrated in the sketch below.
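A minimal numerical illustration of this point; the two-dimensional vector and rotation angle are assumed values chosen only for the example:

```python
import math

def rotate(v, angle):
    """Rotate a 2D vector, i.e. describe it in a rotated orthonormal basis."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v = (3.0, 4.0)
w = rotate(v, math.radians(30))

# The components (the description of the vector) change...
print(v, w)
# ...but the scalar magnitude does not.
print(math.hypot(*v), math.hypot(*w))  # both 5.0 (up to rounding)
```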
Classical scalars
An example of a scalar quantity is temperature: the temperature at a given point is a single number. Velocity, on the other hand, is a vector quantity.
Other examples of scalar quantities are mass, charge, volume, time, speed, pressure, and electric potential at a point inside a medium. The distance between two points in three-dimensional space is a scalar, but the direction from one of those points to the other is not, since describing a direction requires two physical quantities such as the angle on the horizontal plane and the angle away from that plane. Force cannot be described using a scalar, since force has both direction and magnitude; however, the magnitude of a force alone can be described with a scalar, for instance the gravitational force acting on a particle is not a scalar, but its magnitude is. The speed of an object is a scalar (e.g., 180 km/h), while its velocity is not (e.g. a velocity of 180 km/h in a roughly northwest direction might consist of 108 km/h northward and 144 km/h westward).
Some other examples of scalar quantities in Newtonian mechanics are electric charge and charge density.
Relativistic scalars
In the theory of relativity, one considers changes of coordinate systems that trade space for time. As a consequence, several physical quantities that are scalars in "classical" (non-relativistic) physics need to be combined with other quantities and treated as four-vectors or tensors. For example, the charge density at a point in a medium, which is a scalar in classical physics, must be combined with the local current density (a 3-vector) to comprise a relativistic 4-vector. Similarly, energy density must be combined with momentum density and pressure into the stress–energy tensor.
Examples of scalar quantities in relativity include electric charge, spacetime interval (e.g., proper time and proper length), and invariant mass.
Pseudoscalar
See also
Invariant (physics)
Relative scalar
Scalar (mathematics)
Notes
References
External links | Scalar (physics) | [
"Physics",
"Mathematics"
] | 1,202 | [
"Scalar physical quantities",
"Quantity",
"Physical quantities"
] |
3,588,836 | https://en.wikipedia.org/wiki/Hardness | In materials science, hardness (antonym: softness) is a measure of the resistance to plastic deformation, such as an indentation (over an area) or a scratch (linear), induced mechanically either by pressing or abrasion. In general, different materials differ in their hardness; for example hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin, or wood and common plastics. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can be contrasted with soft matter.
Measures
There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another.
Scratch hardness
Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate. The most common test is Mohs scale, which is used in mineralogy. One tool to make this measurement is the sclerometer.
Another tool used to make these tests is the pocket hardness tester. This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to use it a weight of known mass is added to the scale arm at one of the graduated markings, the tool is then drawn across the test surface. The use of the weight and markings allows a known pressure to be applied without the need for complicated machinery.
Indentation hardness
Indentation hardness measures the resistance of a sample to material deformation due to a constant compression load from a sharp object. Tests for indentation hardness are primarily used in engineering and metallurgy. The tests work on the basic premise of measuring the critical dimensions of an indentation left by a specifically dimensioned and loaded indenter. Common indentation hardness scales are Rockwell, Vickers, Shore, and Brinell, amongst others.
Rebound hardness
Rebound hardness, also known as dynamic hardness, measures the height of the "bounce" of a diamond-tipped hammer dropped from a fixed height onto a material. This type of hardness is related to elasticity. The device used to take this measurement is known as a scleroscope. Two scales that measure rebound hardness are the Leeb rebound hardness test and the Bennett hardness scale. The Ultrasonic Contact Impedance (UCI) method determines hardness by measuring the frequency of an oscillating rod. The rod consists of a metal shaft with a vibrating element and a pyramid-shaped diamond mounted on one end.
Hardening
There are five hardening processes: Hall-Petch strengthening, work hardening, solid solution strengthening, precipitation hardening, and martensitic transformation.
In solid mechanics
In solid mechanics, solids generally have three responses to force, depending on the amount of force and the type of material:
They exhibit elasticity—the ability to temporarily change shape, but return to the original shape when the pressure is removed. "Hardness" in the elastic range—a small temporary change in shape for a given force—is known as stiffness in the case of a given object, or a high elastic modulus in the case of a material.
They exhibit plasticity—the ability to permanently change shape in response to the force, but remain in one piece. The yield strength is the point at which elastic deformation gives way to plastic deformation. Deformation in the plastic range is non-linear, and is described by the stress-strain curve. This response produces the observed properties of scratch and indentation hardness, as described and measured in materials science. Some materials exhibit both elasticity and viscosity when undergoing plastic deformation; this is called viscoelasticity.
They fracture—split into two or more pieces.
Strength is a measure of the extent of a material's elastic range, or elastic and plastic ranges together. This is quantified as compressive strength, shear strength, tensile strength depending on the direction of the forces involved. Ultimate strength is an engineering measure of the maximum load a part of a specific material and geometry can withstand.
Brittleness, in technical usage, is the tendency of a material to fracture with very little or no detectable plastic deformation beforehand. Thus in technical terms, a material can be both brittle and strong. In everyday usage "brittleness" usually refers to the tendency to fracture under a small amount of force, which exhibits both brittleness and a lack of strength (in the technical sense). For perfectly brittle materials, yield strength and ultimate strength are the same, because they do not experience detectable plastic deformation. The opposite of brittleness is ductility.
The toughness of a material is the maximum amount of energy it can absorb before fracturing, which is different from the amount of force that can be applied. Toughness tends to be small for brittle materials, because elastic and plastic deformations allow materials to absorb large amounts of energy.
Hardness increases with decreasing particle size. This is known as the Hall-Petch relationship. However, below a critical grain-size, hardness decreases with decreasing grain size. This is known as the inverse Hall-Petch effect.
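The Hall-Petch relationship is commonly written as $H = H_0 + k_H\,d^{-1/2}$, where $d$ is the average grain diameter and $H_0$ and $k_H$ are material constants. A minimal sketch with illustrative (assumed) constants in arbitrary units, not values for any particular material:

```python
def hall_petch_hardness(d, h0=1.0, k=0.15):
    """Hall-Petch estimate of hardness (arbitrary units) from grain size d (m).
    h0 and k are material constants; the defaults here are illustrative only."""
    return h0 + k * d ** -0.5

for d in (1e-4, 1e-5, 1e-6):  # coarse to fine grains
    print(f"d = {d:.0e} m -> H = {hall_petch_hardness(d):.1f}")
```

Below the critical grain size mentioned above this trend reverses (the inverse Hall-Petch effect), so the formula should not be extrapolated into the nanocrystalline regime.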
Hardness of a material to deformation is dependent on its microdurability or small-scale shear modulus in any direction, not to any rigidity or stiffness properties such as its bulk modulus or Young's modulus. Stiffness is often confused for hardness. Some materials are stiffer than diamond (e.g. osmium) but are not harder, and are prone to spalling and flaking in squamose or acicular habits.
Mechanisms and theory
The key to understanding the mechanism behind hardness is understanding the metallic microstructure, or the structure and arrangement of the atoms at the atomic level. In fact, most important metallic properties critical to the manufacturing of today’s goods are determined by the microstructure of a material. At the atomic level, the atoms in a metal are arranged in an orderly three-dimensional array called a crystal lattice. In reality, however, a given specimen of a metal likely never contains a consistent single crystal lattice. A given sample of metal will contain many grains, with each grain having a fairly consistent array pattern. At an even smaller scale, each grain contains irregularities.
There are two types of irregularities at the grain level of the microstructure that are responsible for the hardness of the material. These irregularities are point defects and line defects. A point defect is an irregularity located at a single lattice site inside of the overall three-dimensional lattice of the grain. There are three main point defects. If there is an atom missing from the array, a vacancy defect is formed. If there is a different type of atom at the lattice site that should normally be occupied by a metal atom, a substitutional defect is formed. If there exists an atom in a site where there should normally not be, an interstitial defect is formed. This is possible because space exists between atoms in a crystal lattice. While point defects are irregularities at a single site in the crystal lattice, line defects are irregularities on a plane of atoms. Dislocations are a type of line defect involving the misalignment of these planes. In the case of an edge dislocation, a half plane of atoms is wedged between two planes of atoms. In the case of a screw dislocation two planes of atoms are offset with a helical array running between them.
In glasses, hardness seems to depend linearly on the number of topological constraints acting between the atoms of the network. Hence, the rigidity theory has allowed predicting hardness values with respect to composition.
Dislocations provide a mechanism for planes of atoms to slip and thus a method for plastic or permanent deformation. Planes of atoms can flip from one side of the dislocation to the other effectively allowing the dislocation to traverse through the material and the material to deform permanently. The movement allowed by these dislocations causes a decrease in the material's hardness.
The way to inhibit the movement of planes of atoms, and thus make them harder, involves the interaction of dislocations with each other and interstitial atoms. When a dislocation intersects with a second dislocation, it can no longer traverse through the crystal lattice. The intersection of dislocations creates an anchor point and does not allow the planes of atoms to continue to slip over one another. A dislocation can also be anchored by the interaction with interstitial atoms. If a dislocation comes in contact with two or more interstitial atoms, the slip of the planes will again be disrupted. The interstitial atoms create anchor points, or pinning points, in the same manner as intersecting dislocations.
By varying the presence of interstitial atoms and the density of dislocations, a particular metal's hardness can be controlled. Although seemingly counter-intuitive, as the density of dislocations increases, there are more intersections created and consequently more anchor points. Similarly, as more interstitial atoms are added, more pinning points that impede the movements of dislocations are formed. As a result, the more anchor points added, the harder the material will become.
Relation between hardness number and stress-strain curve
Careful note should be taken of the relationship between a hardness number and the stress-strain curve exhibited by the material. The latter, which is conventionally obtained via tensile testing, captures the full plasticity response of the material (which is in most cases a metal). It is in fact a dependence of the (true) von Mises plastic strain on the (true) von Mises stress, but this is readily obtained from a nominal stress – nominal strain curve (in the pre-necking regime), which is the immediate outcome of a tensile test. This relationship can be used to describe how the material will respond to almost any loading situation, often by using the Finite Element Method (FEM). This applies to the outcome of an indentation test (with a given size and shape of indenter, and a given applied load).
However, while a hardness number thus depends on the stress-strain relationship, inferring the latter from the former is far from simple and is not attempted in any rigorous way during conventional hardness testing. (In fact, the Indentation Plastometry technique, which involves iterative FEM modelling of an indentation test, does allow a stress-strain curve to be obtained via indentation, but this is outside the scope of conventional hardness testing.) A hardness number is just a semi-quantitative indicator of the resistance to plastic deformation. Although hardness is defined in a similar way for most types of test – usually as the load divided by the contact area – the numbers obtained for a particular material are different for different types of test, and even for the same test with different applied loads. Attempts are sometimes made to identify simple analytical expressions that allow features of the stress-strain curve, particularly the yield stress and Ultimate Tensile Stress (UTS), to be obtained from a particular type of hardness number. However, these are all based on empirical correlations, often specific to particular types of alloy: even with such a limitation, the values obtained are often quite unreliable. The underlying problem is that metals with a range of combinations of yield stress and work hardening characteristics can exhibit the same hardness number. The use of hardness numbers for any quantitative purpose should, at best, be approached with considerable caution.
See also
Related properties
Hot hardness
Hardness comparison
Hardness of ceramics
Toughness
Other strengthening mechanisms
Grain boundary strengthening
Precipitation hardening
Solid solution strengthening
Work hardening
Hardness scales, tools and tests
Leeb rebound hardness test
Tablet hardness testing
Persoz pendulum
Roll hardness tester
Schmidt hammer
Janka hardness test
Nanoindentation
Barcol hardness test
References
Further reading
Davis, J. R. (Ed.). (2002). Surface hardening of steels: Understanding the basics. Materials Park, OH: ASM International.
Dieter, George E. (1989). Mechanical Metallurgy. SI Metric Adaptation. Maidenhead, UK: McGraw-Hill Education.
Revankar, G. (2003). "Introduction to hardness testing." Mechanical testing and evaluation, ASM Online Vol. 8.
External links
An introduction to materials hardness
Guidelines to hardness testing
Testing the Hardness of Metals
Condensed matter physics
Matter
Solid mechanics
Materials science
Hardness tests
Physical properties | Hardness | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,665 | [
"Physical phenomena",
"Solid mechanics",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Materials testing",
"Mechanics",
"Condensed matter physics",
"nan",
"Hardness tests",
"Physical properties",
"Matter"
] |
3,590,121 | https://en.wikipedia.org/wiki/Aliquat%20336 | Aliquat 336 (Starks' catalyst) is a quaternary ammonium salt used as a phase transfer catalyst and metal extraction reagent. It contains a mixture of C8 (octyl) and C10 (decyl) chains with C8 predominating. It is an ionic liquid.
Applications
Organic Chemistry
Aliquat 336 is used as a phase transfer catalyst, including in the catalytic oxidation of cyclohexene to 1,6-hexanedioic acid. This reaction is an example of green chemistry, as it is more environmentally friendly than the traditional method of oxidizing cyclohexanol or cyclohexanone with nitric acid or potassium permanganate, which produce hazardous wastes.
Aliquat 336 was used in the total synthesis of manzamine A by Darren Dixon.
Solvent extraction of metals
Aliquat 336 has been used for the extraction of metals by acting as a liquid anion exchanger. It is commonly used as a solution in hydrocarbon solvents such as aromatic kerosene. Aliphatic kerosene can also be used, but requires the addition of a phase modifier (typically a long-chain alcohol) to prevent the formation of a third phase.
Waste treatment
Several applications have been successfully carried out with Aliquat 336, such as the recovery of acids or acid salts, or the removal of certain metals from wastewater. In addition, foaming has also been controlled by using this agent during the treatment of wastewater containing anionic surfactants.
References
Quaternary ammonium compounds
Chlorides
Catalysts
Ionic liquids | Aliquat 336 | [
"Chemistry"
] | 328 | [
"Catalysis",
"Catalysts",
"Chlorides",
"Inorganic compounds",
"Salts",
"Chemical kinetics"
] |
3,591,456 | https://en.wikipedia.org/wiki/Interface%20%28matter%29 | In the physical sciences, an interface is the boundary between two spatial regions occupied by different matter, or by matter in different physical states. The interface between matter and air, or matter and vacuum, is called a surface, and studied in surface science. In thermal equilibrium, the regions in contact are called phases, and the interface is called a phase boundary. An example for an interface out of equilibrium is the grain boundary in polycrystalline matter.
The importance of the interface depends on the type of system: the bigger the quotient area/volume, the greater the effect the interface will have. Consequently, interfaces are very important in systems with large interface area-to-volume ratios, such as colloids.
Interfaces can be flat or curved. For example, oil droplets in a salad dressing are spherical but the interface between water and air in a glass of water is mostly flat.
Surface tension is the physical property which rules interface processes involving liquids. For a liquid film on flat surfaces, the liquid-vapor interface keeps flat to minimize interfacial area and system free energy. For a liquid film on rough surfaces, the surface tension tends to keep the meniscus flat, while the disjoining pressure makes the film conformal to the substrate. The equilibrium meniscus shape is a result of the competition between the capillary pressure and disjoining pressure.
Interfaces may cause various optical phenomena, such as refraction. Optical lenses serve as an example of a practical application of the interface between glass and air.
One topical interface system is the gas-liquid interface between aerosols and other atmospheric molecules.
See also
Capillary surface, a surface that represents the boundary between two fluids
Disjoining pressure
Free surface
Interface and colloid science
Membrane (disambiguation)
Surface phenomenon
References
Colloidal chemistry
Matter
Surface science | Interface (matter) | [
"Physics",
"Chemistry",
"Materials_science"
] | 377 | [
"Colloidal chemistry",
"Colloids",
"Surface science",
"Condensed matter physics",
"Matter"
] |
3,592,504 | https://en.wikipedia.org/wiki/Buck%E2%80%93boost%20converter | The buck–boost converter is a type of DC-to-DC converter that has an output voltage magnitude that is either greater than or less than the input voltage magnitude. It is equivalent to a flyback converter using a single inductor instead of a transformer. Two different topologies are called buck–boost converter. Both of them can produce a range of output voltages, ranging from much larger (in absolute magnitude) than the input voltage, down to almost zero.
In the inverting topology, the output voltage is of the opposite polarity than the input. This is a switched-mode power supply with a similar circuit configuration to the boost converter and the buck converter. The output voltage is adjustable based on the duty cycle of the switching transistor. One possible drawback of this converter is that the switch does not have a terminal at ground; this complicates the driving circuitry. However, this drawback is of no consequence if the power supply is isolated from the load circuit (if, for example, the supply is a battery) because the supply and diode polarity can simply be reversed. When they can be reversed, the switch can be placed either on the ground side or the supply side.
When a buck (step-down) converter is combined with a boost (step-up) converter, the output voltage is typically of the same polarity as the input, and can be lower or higher than the input. Such a non-inverting buck-boost converter may use a single inductor that serves in both the buck mode and the boost mode, with switches used in place of diodes (sometimes called a "four-switch buck-boost converter"), or it may use multiple inductors but only a single switch, as in the SEPIC and Ćuk topologies.
Principle of operation of the inverting topology
The basic principle of the inverting buck–boost converter is fairly simple (see figure 2):
while in the On-state, the input voltage source is directly connected to the inductor (L). This results in accumulating energy in L. In this stage, the capacitor supplies energy to the output load.
while in the Off-state, the inductor is connected to the output load and capacitor, so energy is transferred from L to C and R.
Compared to the buck and boost converters, the characteristics of the inverting buck–boost converter are mainly:
polarity of the output voltage is opposite to that of the input;
the output voltage can vary continuously from 0 to -∞ (for an ideal converter). The output voltage ranges for a buck and a boost converter are respectively Vi to 0 and Vi to ∞.
Conceptual overview
Like the buck and boost converters, the operation of the buck-boost is best understood in terms of the inductor's "reluctance" to allow rapid change in current. From the initial state in which nothing is charged and the switch is open, the current through the inductor is zero. When the switch is first closed, the blocking diode prevents current from flowing into the right hand side of the circuit, so it must all flow through the inductor. However, since the inductor doesn't allow rapid current change, it will initially keep the current low by opposing the voltage provided by the source.
Over time, the inductor will allow the current to slowly increase. In an ideal circuit the voltage across the inductor would remain constant, but when the inherent resistance of wiring, switch and the inductor itself is taken into account, the effective (electro-motive) voltage across the inductor will decrease as the current increases. Also during this time, the inductor will store energy in the form of a magnetic field.
Continuous mode
If the current through the inductor L never falls to zero during a commutation cycle, the converter is said to operate in continuous mode. The current and voltage waveforms in an ideal converter can be seen in Figure 3.
From to , the converter is in On-State, so the switch S is closed. The rate of change in the inductor current (IL) is therefore given by
At the end of the On-state, the increase of IL is therefore:
D is the duty cycle. It represents the fraction of the commutation period T during which the switch is On. Therefore D ranges between 0 (S is never on) and 1 (S is always on).
During the Off-state, the switch S is open, so the inductor current flows through the load. If we assume zero voltage drop in the diode, and a capacitor large enough for its voltage to remain constant, the evolution of IL is:
Therefore, the variation of IL during the Off-period is:
As we consider that the converter operates in steady-state conditions, the amount of energy stored in each of its components has to be the same at the beginning and at the end of a commutation cycle. As the energy in an inductor is given by:
it is obvious that the value of IL at the end of the off-state must be the same as the value of IL at the beginning of the on-state, i.e. the sum of the variations of IL during the on and the off states must be zero:
Substituting and by their expressions yields:
This can be written as:
This in return yields that:
From the above expression it can be seen that the polarity of the output voltage is always negative (because the duty cycle goes from 0 to 1), and that its absolute value increases with D, theoretically up to minus infinity when D approaches 1. Apart from the polarity, this converter is either step-up (a boost converter) or step-down (a buck converter). Thus it is named a buck–boost converter.
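In the notation used below (Vi for the input voltage, Vo for the output voltage, D for the duty cycle, T for the commutation period and L for the inductance; these names are assumed here, since the displayed equations did not survive in this copy), the continuous-mode relations reduce to ΔIL(on) = Vi*D*T/L, ΔIL(off) = Vo*(1-D)*T/L and, in steady state, ΔIL(on) + ΔIL(off) = 0, hence Vo/Vi = -D/(1-D). A minimal Python sketch checking this volt-second balance:

# Minimal sketch (assumed notation): ideal inverting buck-boost converter
# operating in continuous conduction mode.
def ideal_gain(D):
    """Vo/Vi = -D / (1 - D) for the ideal inverting buck-boost in CCM."""
    return -D / (1.0 - D)

def inductor_ripple(Vi, D, T, L):
    """Peak-to-peak inductor current ripple: on-state rise Vi*D*T/L."""
    return Vi * D * T / L

if __name__ == "__main__":
    Vi, T, L = 12.0, 1 / 100e3, 100e-6   # 12 V input, 100 kHz, 100 uH (assumed values)
    for D in (0.25, 0.5, 0.75):
        Vo = ideal_gain(D) * Vi
        balance = Vi * D * T + Vo * (1 - D) * T   # volt-second balance, ~0 in steady state
        ripple_mA = inductor_ripple(Vi, D, T, L) * 1e3
        print(f"D={D:.2f}  Vo={Vo:+6.2f} V  ripple={ripple_mA:6.1f} mA  balance={balance:.1e}")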
Discontinuous mode
In some cases, the amount of energy required by the load is small enough to be transferred in a time smaller than the whole commutation period. In this case, the current through the inductor falls to zero during part of the period. The only difference in the principle described above is that the inductor is completely discharged at the end of the commutation cycle (see waveforms in figure 4). Although slight, the difference has a strong effect on the output voltage equation. It can be calculated as follows:
Because the inductor current at the beginning of the cycle is zero, its maximum value (at ) is
During the off-period, IL falls to zero after δ.T:
Using the two previous equations, δ is:
The load current is equal to the average diode current (). As can be seen on figure 4, the diode current is equal to the inductor current during the off-state. Therefore, the output current can be written as:
Replacing and δ by their respective expressions yields:
Therefore, the output voltage gain can be written as:
Compared to the expression of the output voltage gain for the continuous mode, this expression is much more complicated. Furthermore, in discontinuous operation, the output voltage not only depends on the duty cycle, but also on the inductor value, the input voltage and the output current.
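A minimal sketch of this dependence, using the commonly derived closed form Vo = -Vi^2*D^2*T/(2*L*Io) for the ideal inverting buck-boost in discontinuous mode (an expression consistent with the outline above but assumed here rather than quoted from it); it shows how the output voltage in this mode depends on the inductance, the period and the output current as well as on the duty cycle:

# Minimal sketch (assumed notation): ideal inverting buck-boost in
# discontinuous conduction mode, compared with the continuous-mode gain.
def vo_dcm(Vi, D, T, L, Io):
    """Commonly derived DCM result: Vo = -Vi**2 * D**2 * T / (2 * L * Io)."""
    return -(Vi ** 2) * (D ** 2) * T / (2.0 * L * Io)

def vo_ccm(Vi, D):
    return -D / (1.0 - D) * Vi

if __name__ == "__main__":
    Vi, T, L, D = 12.0, 1 / 100e3, 100e-6, 0.4
    for Io in (0.05, 0.2, 1.0):   # lighter load pushes the converter into DCM
        print(f"Io={Io:4.2f} A  Vo(DCM)={vo_dcm(Vi, D, T, L, Io):8.2f} V  Vo(CCM)={vo_ccm(Vi, D):6.2f} V")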
Limit between continuous and discontinuous modes
As mentioned at the beginning of this section, the converter operates in discontinuous mode when low current is drawn by the load, and in continuous mode at higher load current levels. The limit between discontinuous and continuous modes is reached when the inductor current falls to zero exactly at the end of the commutation cycle. With the notations of figure 4, this corresponds to:
In this case, the output current (output current at the limit between continuous and discontinuous modes) is given by:
Replacing by the expression given in the discontinuous mode section yields:
As is the current at the limit between continuous and discontinuous modes of operations, it satisfies the expressions of both modes. Therefore, using the expression of the output voltage in continuous mode, the previous expression can be written as:
Let's now introduce two more notations:
the normalized voltage, defined by . It corresponds to the gain in voltage of the converter;
the normalized current, defined by . The term is equal to the maximum increase of the inductor current during a cycle; i.e., the increase of the inductor current with a duty cycle D=1. So, in steady state operation of the converter, this means that equals 0 for no output current, and 1 for the maximum current the converter can deliver.
Using these notations, we have:
in continuous mode, ;
in discontinuous mode, ;
the current at the limit between continuous and discontinuous mode is . Therefore, the locus of the limit between continuous and discontinuous modes is given by .
These expressions have been plotted in figure 5. The difference in behavior between the continuous and discontinuous modes can be seen clearly.
Principles of operation of the four-switch topology
The four-switch converter combines the buck and boost converters. It can operate in either the buck or the boost mode. In either mode, only one switch controls the duty cycle, another is for commutation and must be operated inversely to the former one, and the remaining two switches are in a fixed position. A two-switch buck-boost converter can be built with two diodes, but upgrading the diodes to FET switches doesn't cost much extra while efficiency improves due to the lower voltage drop.
Non-ideal circuit
Effect of parasitic resistances
In the analysis above, no dissipative elements (resistors) have been considered. That means that the power is transmitted without losses from the input voltage source to the load. However, parasitic resistances exist in all circuits, due to the resistivity of the materials they are made from. Therefore, a fraction of the power managed by the converter is dissipated by these parasitic resistances.
For the sake of simplicity, we consider here that the inductor is the only non-ideal component, and that it is equivalent to an inductor and a resistor in series. This assumption is acceptable because an inductor is made of one long wound piece of wire, so it is likely to exhibit a non-negligible parasitic resistance (RL). Furthermore, current flows through the inductor both in the on and the off states.
Using the state-space averaging method, we can write:
where and are respectively the average voltage across the inductor and the switch over the commutation cycle. If we consider that the converter operates in steady-state, the average current through the inductor is constant. The average voltage across the inductor is:
When the switch is in the on-state, . When it is off, the diode is forward biased (we consider the continuous mode operation), therefore . Therefore, the average voltage across the switch is:
The output current is the opposite of the inductor current during the off-state. The average inductor current is therefore:
Assuming the output current and voltage have negligible ripple, the load of the converter can be considered purely resistive. If R is the resistance of the load, the above expression becomes:
Using the previous equations, the input voltage becomes:
This can be written as:
If the inductor resistance is zero, the equation above becomes equal to the one of the ideal case. But when RL increases, the voltage gain of the converter decreases compared to the ideal case. Furthermore, the influence of RL increases with the duty cycle. This is summarized in figure 6.
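As a rough illustration of this effect, the sketch below evaluates the closed form Vo/Vi = -(D/(1-D)) * 1/(1 + RL/(R*(1-D)^2)) for the inverting buck-boost with a series inductor resistance RL and a resistive load R; this expression is a standard textbook result assumed here for illustration, not quoted from the text:

# Minimal sketch (standard textbook form, assumed): voltage gain of the
# inverting buck-boost when the inductor has a series parasitic resistance RL
# and the load is a resistor R.
def gain_ideal(D):
    return -D / (1.0 - D)

def gain_lossy(D, RL, R):
    # Volt-second balance with the average drop RL*IL included gives
    # Vo/Vi = -(D/(1-D)) * 1 / (1 + RL / (R*(1-D)**2))
    return gain_ideal(D) / (1.0 + RL / (R * (1.0 - D) ** 2))

if __name__ == "__main__":
    RL, R = 0.2, 10.0   # assumed parasitic and load resistances, ohms
    for D in (0.3, 0.5, 0.7, 0.9):
        print(f"D={D:.1f}  ideal gain={gain_ideal(D):6.2f}  lossy gain={gain_lossy(D, RL, R):6.2f}")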
See also
Ćuk converter
Flyback converter
SEPIC converter
Split-pi topology
References
Further reading
Daniel W. Hart, "Introduction to Power Electronics", Prentice Hall, Upper Saddle River, New Jersey USA, 1997
Christophe Basso, Switch-Mode Power Supplies: SPICE Simulations and Practical Designs. McGraw-Hill.
Frede Blaabjerg, Analysis, control and design of a non-inverting buck-boost converter: A bump-less two-level T–S fuzzy PI control. ISA Transactions.
Leonardo Callegaro, et al., "A Simple Smooth Transition Technique for the Noninverting Buck–Boost Converter". IEEE Transactions on Power Electronics, Vol. 33 (6), June 2018.
Choppers
Voltage regulation | Buck–boost converter | [
"Physics"
] | 2,602 | [
"Voltage",
"Physical quantities",
"Voltage regulation"
] |
3,593,309 | https://en.wikipedia.org/wiki/Relay%20channel | In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
General discrete-time memoryless relay channel
A discrete memoryless single-relay channel can be modelled as four finite sets, X1, X2, Y1 and Y, and a conditional probability distribution p(y, y1 | x1, x2) on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by p(x1, x2).
o------------------o
| Relay Encoder |
o------------------o
Λ |
| y1 x2 |
| V
o---------o x1 o------------------o y o---------o
| Encoder |--->| p(y,y1|x1,x2) |--->| Decoder |
o---------o o------------------o o---------o
There exist three main relaying schemes: Decode-and-Forward, Compress-and-Forward and Amplify-and-Forward. The first two schemes were first proposed in the pioneer article by Cover and El-Gamal.
Decode-and-Forward (DF): In this relaying scheme, the relay decodes the source message in one block and transmits the re-encoded message in the following block. The achievable rate of DF is known as R_DF = max_{p(x1,x2)} min{ I(X1; Y1 | X2), I(X1, X2; Y) }.
Compress-and-Forward (CF): In this relaying scheme, the relay quantizes the received signal in one block and transmits the encoded version of the quantized received signal in the following block. The achievable rate of CF is known as R_CF = max I(X1; Y, Ŷ1 | X2), subject to I(X2; Y) ≥ I(Y1; Ŷ1 | X2, Y).
Amplify-and-Forward (AF): In this relaying scheme, the relay sends an amplified version of the received signal in the last time-slot. Comparing with DF and CF, AF requires much less delay as the relay node operates time-slot by time-slot. Also, AF requires much less computing power as no decoding or quantizing operation is performed at the relay side.
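As a concrete illustration of the amplify-and-forward idea, the sketch below uses standard Gaussian-channel expressions (assumed here, not given in the text): for a half-duplex AF relay with direct-link SNR g_sd, source-relay SNR g_sr and relay-destination SNR g_rd, the relayed path behaves like a link of SNR g_sr*g_rd/(g_sr+g_rd+1), the destination combines it with the direct link, and the factor 1/2 accounts for the two time slots used per message:

# Illustrative sketch (standard Gaussian-channel formulas assumed, not taken
# from the article): achievable rate of a simple half-duplex amplify-and-forward
# relay with maximum-ratio combining at the destination.
import math

def af_rate(g_sd, g_sr, g_rd):
    g_relay = g_sr * g_rd / (g_sr + g_rd + 1.0)   # end-to-end SNR of the relayed path
    return 0.5 * math.log2(1.0 + g_sd + g_relay)  # 1/2: two time slots per message

def direct_rate(g_sd):
    return math.log2(1.0 + g_sd)

if __name__ == "__main__":
    for g_sd, g_sr, g_rd in [(1.0, 10.0, 10.0), (0.5, 100.0, 20.0)]:
        print(f"direct={direct_rate(g_sd):.2f} bit/use   AF={af_rate(g_sd, g_sr, g_rd):.2f} bit/use")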
Cut-set upper bound
The first upper bound on the capacity of the relay channel is derived in the pioneer article by Cover and El-Gamal and is known as the cut-set upper bound. This bound says C ≤ max_{p(x1,x2)} min{ I(X1; Y, Y1 | X2), I(X1, X2; Y) }, where C is the capacity of the relay channel. The first term and second term in the minimization above are called the broadcast bound and the multi-access bound, respectively.
Degraded relay channel
A relay channel is said to be degraded if y depends on x1 only through y1 and x2, i.e., p(y | x1, x2, y1) = p(y | x2, y1). In the article by Cover and El-Gamal it is shown that the capacity of the degraded relay channel can be achieved using the Decode-and-Forward scheme. It turns out that the capacity in this case is equal to the cut-set upper bound.
Reversely degraded relay channel
A relay channel is said to be reversely degraded if . Cover and El-Gamal proved that the Direct Transmission Lower Bound (wherein relay is not used) is tight when the relay channel is reversely degraded.
Feedback relay channel
Relay without delay channel
In a relay-without-delay channel (RWD), each transmitted relay symbol can depend on relay's past as well as present received symbols. Relay Without Delay was shown to achieve rates that are outside the Cut-set upper bound. Recently, it was also shown that instantaneous relays (a special case of relay-without-delay) are capable of improving not only the capacity, but also Degrees of Freedom (DoF) of the 2-user interference channel.
See also
Cooperative diversity
Relay (disambiguation)
References
Thomas M. Cover and Abbas El Gamal, "Capacity theorems for the relay channel," IEEE Transactions on Information Theory (1979), pp. 572–584
External links
Many resources on the Relay Channel and Cooperative Communications are available at
Information theory
Telecommunication theory | Relay channel | [
"Mathematics",
"Technology",
"Engineering"
] | 891 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
3,593,667 | https://en.wikipedia.org/wiki/Atomic%20layer%20epitaxy | Atomic layer epitaxy (ALE), more generally known as atomic layer deposition (ALD), is a specialized form of thin film growth (epitaxy) that typically deposit alternating monolayers of two elements onto a substrate. The crystal lattice structure achieved is thin, uniform, and aligned with the structure of the substrate. The reactants are brought to the substrate as alternating pulses with "dead" times in between. ALE makes use of the fact that the incoming material is bound strongly until all sites available for chemisorption are occupied. The dead times are used to flush the excess material.
It is mostly used in semiconductor fabrication to grow thin films of thickness in the nanometer scale.
Technique
This technique was invented in 1974 and patented the same year (patent published in 1976) by Dr. Tuomo Suntola at the Instrumentarium company, Finland. Dr. Suntola's purpose was to grow thin films of zinc sulfide to fabricate electroluminescent flat panel displays. The key idea of the technique is the use of a self-limiting chemical reaction to control the thickness of the deposited film accurately. Since the early days, ALE (ALD) has grown into a global thin film technology which has enabled the continuation of Moore's law. In 2018, Suntola received the Millennium Technology Prize for ALE (ALD) technology.
Compared to basic chemical vapour deposition, in ALE (ALD) chemical reactants are pulsed alternately into a reaction chamber and then chemisorb in a saturating manner on the surface of the substrate, forming a chemisorbed monolayer.
ALD introduces two complementary precursors (e.g. Al(CH3)3 and H2O) alternately into the reaction chamber. Typically, one of the precursors will adsorb onto the substrate surface until it saturates the surface, and further growth cannot occur until the second precursor is introduced. Thus the film thickness is controlled by the number of precursor cycles rather than by the deposition time, as is the case for conventional CVD processes. ALD allows for extremely precise control of film thickness and uniformity.
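The cycle-counting nature of the process lends itself to very simple thickness bookkeeping: total thickness is roughly the number of cycles times the growth per cycle. The growth-per-cycle value in the sketch below (about 0.1 nm/cycle, typical of trimethylaluminium/water alumina processes) is an assumption for illustration, not a figure from the text:

# Illustrative sketch: in ALD/ALE the thickness is set by counting self-limiting
# cycles rather than by deposition time. The growth-per-cycle value is assumed.
def cycles_for_thickness(target_nm, growth_per_cycle_nm=0.1):
    return round(target_nm / growth_per_cycle_nm)

if __name__ == "__main__":
    for target in (1.0, 5.0, 20.0):   # target thicknesses in nm
        n = cycles_for_thickness(target)
        print(f"{target:5.1f} nm  ->  {n} cycles  (~{n * 0.1:.1f} nm deposited)")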
See also
Atomic layer deposition
References
External links
Plasma-assisted Atomic Layer Deposition by the Plasma & Materials Processing group at Eindhoven University of Technology
Atomic layer epitaxy – a valuable tool for nanotechnology?
ALENET – Atomic Layer Epitaxy Network
Surface smoothing of GaAs microstructure by atomic layer epitaxy
Electrochemical characterisation of atomic layer deposition
Thin film deposition
Finnish inventions | Atomic layer epitaxy | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 519 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Planes (geometry)",
"Solid state engineering"
] |
3,593,867 | https://en.wikipedia.org/wiki/Chemical%20beam%20epitaxy | Chemical beam epitaxy (CBE) forms an important class of deposition techniques for semiconductor layer systems, especially III-V semiconductor systems. This form of epitaxial growth is performed in an ultrahigh vacuum system. The reactants are in the form of molecular beams of reactive gases, typically as the hydride or a metalorganic. The term CBE is often used interchangeably with metal-organic molecular beam epitaxy (MOMBE). The nomenclature does differentiate between the two (slightly different) processes, however. When used in the strictest sense, CBE refers to the technique in which both components are obtained from gaseous sources, while MOMBE refers to the technique in which the group III component is obtained from a gaseous source and the group V component from a solid source.
Basic principles
Chemical beam epitaxy was first demonstrated by W.T. Tsang in 1984. This technique was then described as a hybrid of metal-organic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) that exploited the advantages of both the techniques. In this initial work, InP and GaAs were grown using gaseous group III and V alkyls. While group III elements were derived from the pyrolysis of the alkyls on the surface, the group V elements were obtained from the decomposition of the alkyls by bringing in contact with heated Tantalum (Ta) or Molybdenum (Mo) at 950-1200 °C.
Typical pressure in the gas reactor is between 102 Torr and 1 atm for MOCVD. Here, the transport of gas occurs by viscous flow and chemicals reach the surface by diffusion. In contrast, gas pressures of less than 10−4 Torr are used in CBE. The gas transport now occurs as molecular beam due to the much longer mean-free paths, and the process evolves to a chemical beam deposition. It is also worth noting here that MBE employs atomic beams (such as aluminium (Al) and Gallium (Ga)) and molecular beams (such as As4 and P4) that are evaporated at high temperatures from solid elemental sources, while the sources for CBE are in vapor phase at room temperatures. A comparison of the different processes in the growth chamber for MOCVD, MBE and CBE can be seen in figure 1.
Experimental setup
A combination of turbomolecular and cryo pumps are used in standard UHV growth chambers. The chamber itself is equipped with a liquid nitrogen cryoshield and a rotatable crystal holder capable of carrying more than one wafer. The crystal holder is usually heated from the backside to temperatures of 500 to 700 °C. Most setups also have RHEED equipment for the in-situ monitoring of surface superstructures on the growing surface and for measuring growth rates, and mass spectrometers for the analysis of the molecular species in the beams and the analysis of the residual gases.
The gas inlet system, which is one of the most important components of the system, controls the material beam flux. Pressure-controlled systems are used most commonly. The material flux is controlled by the input pressure of the gas injection capillary. The pressure inside the chamber can be measured and controlled by a capacitance manometer. The molecular beams of gaseous source materials enter the chamber through injectors or effusion jets that ensure a homogeneous beam profile. For some starting compounds, such as the hydrides that are the group V starting material, the hydrides have to be precracked in the injector. This is usually done by thermal decomposition on a heated metal or filament.
Growth kinetics
In order to better understand the growth kinetics associated with CBE, it is important to look at physical and chemical processes associated with MBE and MOCVD as well. Figure 2 depicts those. The growth kinetics for these three techniques differ in many ways. In conventional MBE, the growth rate is determined by the arrival rate of the group III atomic beams. The epitaxial growth takes place as the group III atoms impinge on the heated substrate surface, migrates into the appropriate lattice sites and then deposits near excess group V dimers or tetramers. It is worth noting that no chemical reaction is involved at the surface since the atoms are generated by thermal evaporation from solid elemental sources.
In MOCVD, group III alkyls are already partially dissociated in the gas stream. These diffuse through a stagnant boundary layer that exists over the heated substrate, after which they dissociate into the atomic group III elements. These atoms then migrate to the appropriate lattice site and deposit epitaxially by associating with a group V atom that was derived from the thermal decomposition of the hydrides. The growth rate here is usually limited by the diffusion rate of the group III alkyls through the boundary layer. Gas phase reactions between the reactants have also been observed in this process.
In CBE processes, the hydrides are cracked in a high temperature injector before they reach the substrate. The temperatures are typically 100-150 °C lower than they are in a similar MOCVD or MOVPE. There is also no boundary layer (such as the one in MOCVD) and molecular collisions are minimal due to the low pressure. The group V alkyls are usually supplied in excess, and the group III alkyl molecules impinge directly onto the heated substrate as in conventional MBE. The group III alkyl molecule has two options when this happens. The first option is to dissociate its three alkyl radicals by acquiring thermal energy from the surface, and leaving behind the elemental group III atoms on the surface. The second option is to re-evaporate partially or completely undissociated. Thus, the growth rate is determined by the arrival rate of the group III alkyls at a higher substrate temperature, and by the surface pyrolysis rate at lower temperatures.
Compatibility with device fabrication
Selective growth at low temperatures
Selective growth through dielectric masking is readily achieved using CBE as compared to its parent techniques of MBE and MOCVD. Selective growth is hard to achieve using elemental source MBE because group III atoms do not desorb readily after they are adsorbed. With chemical sources, the reactions associated with the growth rate are faster on the semiconductor surface than on the dielectric layer. No group III element can, however, arrive at the dielectric surface in CBE due to the absence of any gas phase reactions. Also, it is easier for the impinging group III metalorganic molecules to desorb in the absence of the boundary layer. This makes it easier to perform selective epitaxy using CBE and at lower temperatures, compared to MOCVD or MOVPE.
In recent developments patented by ABCD Technology, substrate rotation is no longer required, leading to new possibilities such as in-situ patterning with particle beams. This possibility opens very interesting perspectives to achieve patterned thin films in a single step, in particular for materials that are difficult to etch such as oxides.
p-type doping
It was observed that using TMGa for the CBE of GaAs led to high p-type background doping (1020 cm−3) due to incorporated carbon. However, it was found that using TEGa instead of TMGa led to very clean GaAs with room temperature hole concentrations between 1014 and 1016 cm−3. It has been demonstrated that the hole concentrations can be adjusted between 1014 and 1021 cm−3 by just adjusting the alkyl beam pressure and the TMGa/TEGa ratio, providing means for achieving high and controllable p-type doping of GaAs. This has been exploited for fabricating high quality heterojunction bipolar transistors.
Advantages and disadvantages
CBE offers many other advantages over its parent techniques of MOCVD and MBE, some of which are listed below:
Advantages over MBE
Easier multiwafer scaleup: Substrate rotation is required for uniformity in thickness and conformity since MBE has individual effusion cells for each element. Large effusion cells and efficient heat dissipation make multiwafer scaleup more difficult.
Better for production environment: Instant flux response due to precision electronic control flow.
Absence of oval defects: These oval defects generally arise from micro-droplets of Ga or In spit out from high temperature effusion cells. These defects vary in size and density system-to-system and time-to-time.
Lower drifts in effusion conditions that do not depend on effusive source filling.
In recent developments patented by ABCD Technology, substrate rotation is no longer required.
Advantages over MOCVD
Easy implementation of in-situ diagnostic instruments such as RHEED.
Compatibility with other high vacuum thin-film processing methods, such as metal evaporation and ion implantation.
Shortcomings of CBE
More pumping required compared to MOCVD.
Composition control can be difficult when growing GaInAs: at high temperature Ga incorporation improves, but In desorption becomes a problem, so a compromise between high and low temperature must be found for good composition control.
High carbon incorporation for GaAlAs.
See also
Epitaxy
Molecular beam epitaxy
MOVPE
Compound semiconductor
Chemical vapor deposition
Metalorganics
Thin-film deposition
RHEED
References
Chemical vapor deposition
Thin film deposition
Semiconductor growth | Chemical beam epitaxy | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,938 | [
"Thin film deposition",
"Coatings",
"Thin films",
"Chemical vapor deposition",
"Planes (geometry)",
"Solid state engineering"
] |
3,594,326 | https://en.wikipedia.org/wiki/Backpack%20helicopter | A backpack helicopter / helipack is a helicopter motor and rotor and controls assembly that can be strapped to a person's back, so they can walk about on the ground wearing it, and can use it to fly. It uses a harness like a parachute harness and should have a strap between the legs (so the pilot does not fall out of the harness during flight). Some designs may use a ducted fan design to increase upward thrust. Several inventors have tried to make backpack helicopters, with mixed results.
Typically, a backpack helicopter differs from a conventional helicopter in two main ways:
First, there is no tail rotor, and the main rotors are contra-rotating. Yaw is controlled by fine adjustment of a differential gear in the rotor drive transmission. When one rotor is adjusted to spin slightly faster than the other, it induces yaw (turning motion).
Second, the rotors are fixed pitch, which assists with simplicity; this means, however, that in the event of engine failure autorotation is impossible. Usually, a ballistic parachute would be incorporated for safety.
An edition of Popular Science magazine in 1969 featured a backpack helicopter that used small jet engines in a tip jet configuration instead of contra-rotating rotors. This design could function in autorotation. Related devices add a seat and leg supports to the backpack helicopter, making them small, open-topped helicopters. In theory, a helicopter would be more efficient than a rocket-powered jetpack, possessing a greater specific impulse, and being more suited to hovering, due to the lower velocities of the propelled gases.
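The efficiency argument can be made quantitative with simple actuator-disc (momentum) theory, under which the ideal hover power for thrust T through a rotor disc of area A is P = sqrt(T^3/(2*rho*A)); the sketch below (all numbers assumed for illustration) shows how strongly the required power falls as the rotor diameter grows, which is why rotors beat small rocket or jet thrusters for hovering:

# Illustrative sketch (momentum / actuator-disc theory, values assumed): ideal
# hover power P = sqrt(T**3 / (2 * rho * A)). A bigger rotor disc area A means
# less power for the same thrust.
import math

RHO = 1.225  # air density at sea level, kg/m^3

def ideal_hover_power(thrust_N, rotor_diameter_m):
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return math.sqrt(thrust_N ** 3 / (2.0 * RHO * area))

if __name__ == "__main__":
    weight = (80 + 30) * 9.81     # pilot plus machine, in newtons (assumed)
    for d in (0.5, 2.0, 4.0):     # rotor diameter in metres
        kw = ideal_hover_power(weight, d) / 1000.0
        print(f"rotor {d:3.1f} m  ->  ideal hover power ~{kw:5.1f} kW")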
Australian electric company CopterPack had developed "an electric backpack helicopter with a self-levelling autopilot", and released test videos in June 2021. However, the device consists of two rotors with diameters around connected via carbon fiber tubes to a backpack with battery packs, and a pair of armrests with hand controls on them. Later video analysis revealed that the operator and equipment were suspended from a drop cable that had been edited out using post-production software.
Examples
Pure backpacks
The Heliofly was a make which was designed in Germany in 1941 onwards.
The Pentecost HX-1 Hoppi-Copter was developed by Horace T. Pentecost, an independent inventor and demonstrated to the military in 1945.
Rhyme (made in Japan)
The Libelula ("dragonfly") from Tecnologia Aeroespacial Mexicana has a 2-bladed rotor driven by a small rocket motor at the end of each rotor blade. The company also manufactures a jetpack.
With a seat
SoloTrek XFV (Exo-skeletal Flying Vehicle).
Martin Jetpack
Vortech designed various models which have seats. They formerly also made a pure backpack model with two very long rotor blades driven by a little propane-powered jet motor at the end of each blade.
GEN H-4
Hirobo
Trek Aerospace's Springtail
See also
Baumgärtl Heliofly III
Jet pack
Hovercar
Ultralight aircraft
Gyrodyne RON Rotorcycle
Hiller ROE-1 / YROE-1 "Rotorcycle"
References
Aircraft configurations
Helicopters
Ultralight aircraft | Backpack helicopter | [
"Engineering"
] | 655 | [
"Aircraft configurations",
"Aerospace engineering"
] |
3,594,804 | https://en.wikipedia.org/wiki/Cyclic%20stress | Cyclic stress is the distribution of forces (aka stresses) that change over time in a repetitive fashion. As an example, consider one of the large wheels used to drive an aerial lift such as a ski lift. The wire cable wrapped around the wheel exerts a downward force on the wheel and the drive shaft supporting the wheel. Although the shaft, wheel, and cable move, the force remains nearly vertical relative to the ground. Thus a point on the surface of the drive shaft will undergo tension when it is pointing towards the ground and compression when it is pointing to the sky.
Types of cyclic stress
Cyclic stress is frequently encountered in rotating machinery where a bending moment is applied to a rotating part. This is called a cyclic bending stress and the aerial lift above is a good example. However, cyclic axial stresses and cyclic torsional stresses also exist. An example of cyclic axial stress would be a bungee cord (see bungee jumping), which must support the mass of people as they jump off structures such as bridges. When a person reaches the end of a cord, the cord deflects elastically and stops the person's descent. This creates a large axial stress in the cord. A fraction of the elastic potential energy stored in the cord is typically transferred back to the person, throwing the person upwards some fraction of the distance he or she fell. The person then falls on the cord again, inducing stress in the cord. This happens multiple times per jump. The same cord is used for several jumps, creating cyclical stresses in the cord that could eventually cause failure if not replaced.
Cyclic stress and material failure
When cyclic stresses are applied to a material, even though the stresses do not cause plastic deformation, the material may fail due to fatigue. Fatigue failure is typically modeled by decomposing cyclic stresses into mean and alternating components. Mean stress is the time average of the principal stress. The definition of alternating stress varies between different sources. It is either defined as the difference between the minimum and the maximum stress, or the difference between the mean and maximum stress. Engineers try to design mechanisms whose parts are subjected to a single type (bending, axial, or torsional) of cyclic stress because this more closely matches experiments used to characterize fatigue failure in different materials.
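A minimal sketch of the decomposition mentioned above, using the convention that the mean stress is (sigma_max + sigma_min)/2 and the alternating stress is (sigma_max - sigma_min)/2 (one of the two conventions the text notes); the example load histories are invented for illustration:

# Illustrative sketch: decomposing a stress cycle into mean and alternating
# components. sigma_a is taken here as half of (max - min).
def mean_and_alternating(stress_history):
    s_max, s_min = max(stress_history), min(stress_history)
    sigma_m = 0.5 * (s_max + s_min)
    sigma_a = 0.5 * (s_max - s_min)
    return sigma_m, sigma_a

if __name__ == "__main__":
    # fully reversed bending of a rotating shaft: zero mean stress
    rotating_bending = [150.0, 0.0, -150.0, 0.0, 150.0]   # MPa
    # a bungee cord loaded in tension only: non-zero mean stress
    bungee = [5.0, 40.0, 5.0, 38.0, 5.0]                   # MPa
    for name, hist in [("rotating bending", rotating_bending), ("bungee cord", bungee)]:
        m, a = mean_and_alternating(hist)
        print(f"{name:17s}  mean = {m:6.1f} MPa   alternating = {a:6.1f} MPa")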
References
Materials science
Mechanics | Cyclic stress | [
"Physics",
"Materials_science",
"Engineering"
] | 463 | [
"Applied and interdisciplinary physics",
"Materials science",
"Mechanics",
"Mechanical engineering",
"nan"
] |
20,927,179 | https://en.wikipedia.org/wiki/Dixmier%20trace | In mathematics, the Dixmier trace, introduced by , is a non-normal trace on a space of linear operators on a Hilbert space larger than the space of trace class operators. Dixmier traces are examples of singular traces.
Some applications of Dixmier traces to noncommutative geometry are described in .
Definition
If H is a Hilbert space, then L1,∞(H) is the space of compact linear operators T on H such that the norm
is finite, where the numbers μi(T) are the eigenvalues of |T| arranged in decreasing order. Let
.
The Dixmier trace Trω(T) of T is defined for positive operators T of L1,∞(H) to be
where limω is a scale-invariant positive "extension" of the usual limit, to all bounded sequences. In other words, it has the following properties:
limω(αn) ≥ 0 if all αn ≥ 0 (positivity)
limω(αn) = lim(αn) whenever the ordinary limit exists
limω(α1, α1, α2, α2, α3, ...) = limω(αn) (scale invariance)
There are many such extensions (such as a Banach limit of α1, α2, α4, α8,...) so there are many different Dixmier traces.
As the Dixmier trace is linear, it extends by linearity to all operators of L1,∞(H).
If the Dixmier trace of an operator is independent of the choice of limω then the operator is called measurable.
Properties
Trω(T) is linear in T.
If T ≥ 0 then Trω(T) ≥ 0
If S is bounded then Trω(ST) = Trω(TS)
Trω(T) does not depend on the choice of inner product on H.
Trω(T) = 0 for all trace class operators T, but there are compact operators for which it is equal to 1.
A trace φ is called normal if φ(sup xα) = sup φ( xα) for every bounded increasing directed family of positive operators. Any normal trace on is equal to the usual trace, so the Dixmier trace is an example of a non-normal trace.
Examples
A compact self-adjoint operator with eigenvalues 1, 1/2, 1/3, ... has Dixmier trace equal to 1.
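This example can be checked numerically: with eigenvalues mu_i = 1/i the quotients (sum over i <= N of mu_i)/log N tend to 1, so every choice of lim_omega assigns the Dixmier trace 1, whereas a trace-class sequence such as mu_i = 1/i^2 gives 0. The sketch below is illustrative only and does not construct lim_omega:

# Illustrative numerical sketch (not a substitute for the limit construction):
# partial sums of the eigenvalue sequence divided by log N. For mu_i = 1/i the
# quotient tends to 1, matching the example; for mu_i = 1/i**2 it tends to 0.
import math

def dixmier_quotient(mu, N):
    return sum(mu(i) for i in range(1, N + 1)) / math.log(N)

if __name__ == "__main__":
    for N in (10**2, 10**4, 10**6):
        q1 = dixmier_quotient(lambda i: 1.0 / i, N)
        q2 = dixmier_quotient(lambda i: 1.0 / i**2, N)
        print(f"N={N:>8}  1/i -> {q1:.4f}   1/i^2 -> {q2:.4f}")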
If the eigenvalues μi of the positive operator T have the property that
converges for Re(s)>1 and extends to a meromorphic function near s=1 with at most a simple pole at s=1, then the Dixmier trace
of T is the residue at s=1 (and in particular is independent of the choice of ω).
showed that Wodzicki's noncommutative residue of a pseudodifferential operator on a manifold M of order -dim(M) is equal to its Dixmier trace.
References
Albeverio, S.; Guido, D.; Ponosov, A.; Scarlatti, S.: Singular traces and compact operators. J. Funct. Anal. 137 (1996), no. 2, 281—302.
See also
Singular trace
Von Neumann algebras
Hilbert spaces
Operator theory
Trace theory | Dixmier trace | [
"Physics"
] | 718 | [
"Hilbert spaces",
"Quantum mechanics"
] |
20,928,420 | https://en.wikipedia.org/wiki/Translational%20research%20informatics | Translational research informatics (TRI) is a sister domain to or a sub-domain of biomedical informatics or medical informatics concerned with the application of informatics theory and methods to translational research. There is some overlap with the related domain of clinical research informatics, but TRI is more concerned with enabling multi-disciplinary research to accelerate clinical outcomes, with clinical trials often being the natural step beyond translational research.
Translational research as defined by the National Institutes of Health includes two areas of translation. One is the process of applying discoveries generated during research in the laboratory, and in preclinical studies, to the development of trials and studies in humans. The second area of translation concerns research aimed at enhancing the adoption of best practices in the community. Cost-effectiveness of prevention and treatment strategies is also an important part of translational research.
Overview
Translational research informatics can be described as "an integrated software solution to manage the: (i) logistics, (ii) data integration, and (iii) collaboration, required by translational investigators and their supporting institutions". It is the class of informatics systems that sits between and often interoperates with: (i) health information technology/electronic medical record systems, (ii) CTMS/clinical research informatics, and (iii) statistical analysis and data mining.
Translational research informatics is relatively new, with most CTSA awardee academic medical centers actively acquiring and integrating systems to enable the end-to-end TRI requirements. One advanced TRI system is being implemented at the Windber Research Institute in collaboration with GenoLogics and InforSense. Translational Research Informatics systems are expected to rapidly develop and evolve over the next couple of years.
Systems
CTRI-dedicated wiki
Further discussion of this domain can be found at the Clinical Research Informatics Wiki (CRI Wiki), a wiki dedicated to issues in clinical and translational research informatics.
See also
Bioinformatics
References
Bioinformatics
Laboratory information management system | Translational research informatics | [
"Chemistry",
"Engineering",
"Biology"
] | 409 | [
"Biological engineering",
"Bioinformatics stubs",
"Biotechnology stubs",
"Health informatics",
"Biochemistry stubs",
"Bioinformatics",
"Medical technology"
] |
130,280 | https://en.wikipedia.org/wiki/Short%20five%20lemma | In mathematics, especially homological algebra and other applications of abelian category theory, the short five lemma is a special case of the five lemma.
It states that for the following commutative diagram (in any abelian category, or in the category of groups), if the rows are short exact sequences, and if g and h are isomorphisms, then f is an isomorphism as well.
It follows immediately from the five lemma.
The essence of the lemma can be summarized as follows: if you have a homomorphism f from an object B to an object B′, and this homomorphism induces an isomorphism from a subobject A of B to a subobject A′ of B′ and also an isomorphism from the factor object B/A to B′/A′, then f itself is an isomorphism. Note however that the existence of f (such that the diagram commutes) has to be assumed from the start; two objects B and B′ that simply have isomorphic sub- and factor objects need not themselves be isomorphic (for example, in the category of abelian groups, B could be the cyclic group of order four and B′ the Klein four-group).
References
Homological algebra
Lemmas in category theory | Short five lemma | [
"Mathematics"
] | 250 | [
"Mathematical structures",
"Lemmas in category theory",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
130,526 | https://en.wikipedia.org/wiki/Riemann%20curvature%20tensor | In the mathematical field of differential geometry, the Riemann curvature tensor or Riemann–Christoffel tensor (after Bernhard Riemann and Elwin Bruno Christoffel) is the most common way used to express the curvature of Riemannian manifolds. It assigns a tensor to each point of a Riemannian manifold (i.e., it is a tensor field). It is a local invariant of Riemannian metrics that measures the failure of the second covariant derivatives to commute. A Riemannian manifold has zero curvature if and only if it is flat, i.e. locally isometric to the Euclidean space. The curvature tensor can also be defined for any pseudo-Riemannian manifold, or indeed any manifold equipped with an affine connection.
It is a central mathematical tool in the theory of general relativity, the modern theory of gravity. The curvature of spacetime is in principle observable via the geodesic deviation equation. The curvature tensor represents the tidal force experienced by a rigid body moving along a geodesic in a sense made precise by the Jacobi equation.
Definition
Let (M, g) be a Riemannian or pseudo-Riemannian manifold. We define the Riemann curvature tensor as a map taking vector fields X, Y, Z on M to a vector field R(X, Y)Z, by the following formula, where ∇ is the Levi-Civita connection:
R(X, Y)Z = ∇X(∇YZ) - ∇Y(∇XZ) - ∇[X,Y]Z,
or equivalently
R(X, Y) = [∇X, ∇Y] - ∇[X,Y],
where [X, Y] is the Lie bracket of vector fields and [∇X, ∇Y] is a commutator of differential operators. It turns out that the right-hand side actually only depends on the value of the vector fields at a given point, which is notable since the covariant derivative of a vector field also depends on the field values in a neighborhood of the point. Hence, R is a (1,3)-tensor field. For fixed X and Y, the linear transformation Z ↦ R(X, Y)Z is also called the curvature transformation or endomorphism. Occasionally, the curvature tensor is defined with the opposite sign.
The curvature tensor measures noncommutativity of the covariant derivative, and as such is the integrability obstruction for the existence of an isometry with Euclidean space (called, in this context, flat space).
Since the Levi-Civita connection is torsion-free, its curvature can also be expressed in terms of the second covariant derivative
which depends only on the values of at a point.
The curvature can then be written as
Thus, the curvature tensor measures the noncommutativity of the second covariant derivative. In abstract index notation, The Riemann curvature tensor is also the commutator of the covariant derivative of an arbitrary covector with itself:
This formula is often called the Ricci identity. This is the classical method used by Ricci and Levi-Civita to obtain an expression for the Riemann curvature tensor. This identity can be generalized to get the commutators for two covariant derivatives of arbitrary tensors as follows
This formula also applies to tensor densities without alteration, because for the Levi-Civita (not generic) connection one gets:
where
It is sometimes convenient to also define the purely covariant version of the curvature tensor by
Geometric meaning
Informally
One can see the effects of curved space by comparing a tennis court and the Earth. Start at the lower right corner of the tennis court, with a racket held out towards north. Then while walking around the outline of the court, at each step make sure the tennis racket is maintained in the same orientation, parallel to its previous positions. Once the loop is complete the tennis racket will be parallel to its initial starting position. This is because tennis courts are built so the surface is flat. On the other hand, the surface of the Earth is curved: we can complete a loop on the surface of the Earth. Starting at the equator, point a tennis racket north along the surface of the Earth. Once again the tennis racket should always remain parallel to its previous position, using the local plane of the horizon as a reference. For this path, first walk to the north pole, then walk sideways (i.e. without turning), then down to the equator, and finally walk backwards to your starting position. Now the tennis racket will be pointing towards the west, even though when you began your journey it pointed north and you never turned your body. This process is akin to parallel transporting a vector along the path and the difference identifies how lines which appear "straight" are only "straight" locally. Each time a loop is completed the tennis racket will be deflected further from its initial position by an amount depending on the distance and the curvature of the surface. It is possible to identify paths along a curved surface where parallel transport works as it does on flat space. These are the geodesics of the space, for example any segment of a great circle of a sphere.
The concept of a curved space in mathematics differs from conversational usage. For example, if the above process was completed on a cylinder one would find that it is not curved overall as the curvature around the cylinder cancels with the flatness along the cylinder, which is a consequence of Gaussian curvature and Gauss's Theorema Egregium. A familiar example of this is a floppy pizza slice, which will remain rigid along its length if it is curved along its width.
The Riemann curvature tensor is a way to capture a measure of the intrinsic curvature. When you write it down in terms of its components (like writing down the components of a vector), it consists of a multi-dimensional array of sums and products of partial derivatives (some of those partial derivatives can be thought of as akin to capturing the curvature imposed upon someone walking in straight lines on a curved surface).
Formally
When a vector in a Euclidean space is parallel transported around a loop, it will again point in the initial direction after returning to its original position. However, this property does not hold in the general case. The Riemann curvature tensor directly measures the failure of this in a general Riemannian manifold. This failure is known as the non-holonomy of the manifold.
Let be a curve in a Riemannian manifold . Denote by the parallel transport map along . The parallel transport maps are related to the covariant derivative by
for each vector field defined along the curve.
Suppose that and are a pair of commuting vector fields. Each of these fields generates a one-parameter group of diffeomorphisms in a neighborhood of . Denote by and , respectively, the parallel transports along the flows of and for time . Parallel transport of a vector around the quadrilateral with sides , , , is given by
The difference between this and measures the failure of parallel transport to return to its original position in the tangent space . Shrinking the loop by sending gives the infinitesimal description of this deviation:
where is the Riemann curvature tensor.
Coordinate expression
Converting to the tensor index notation, the Riemann curvature tensor is given by
where are the coordinate vector fields. The above expression can be written using Christoffel symbols:
(See also List of formulas in Riemannian geometry).
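The coordinate formula lends itself to direct symbolic computation. The sketch below is an illustrative computation (the round metric of a sphere of radius r is chosen as the example): it builds the Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl}(d_i g_{lj} + d_j g_{li} - d_l g_{ij}) and then the components R^rho_{sigma mu nu} = d_mu Gamma^rho_{nu sigma} - d_nu Gamma^rho_{mu sigma} + Gamma^rho_{mu lam} Gamma^lam_{nu sigma} - Gamma^rho_{nu lam} Gamma^lam_{mu sigma}, recovering R_{theta phi theta phi} = r^2 sin^2(theta) and Gaussian curvature 1/r^2:

# Illustrative sketch: Riemann tensor of the 2-sphere of radius r, computed
# from the metric via Christoffel symbols using sympy.
import sympy as sp

theta, phi, r = sp.symbols('theta phi r', positive=True)
x = [theta, phi]
g = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(theta)**2]])   # round metric
ginv = g.inv()
n = 2

# Christoffel symbols of the second kind: Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                                         - sp.diff(g[i, j], x[l])) for l in range(n)) / 2)
           for j in range(n)] for i in range(n)] for k in range(n)]

def riemann(rho, sig, mu, nu):
    """R^rho_{sig mu nu} from the standard coordinate formula."""
    expr = sp.diff(Gamma[rho][nu][sig], x[mu]) - sp.diff(Gamma[rho][mu][sig], x[nu])
    expr += sum(Gamma[rho][mu][lam] * Gamma[lam][nu][sig]
                - Gamma[rho][nu][lam] * Gamma[lam][mu][sig] for lam in range(n))
    return sp.simplify(expr)

# Fully covariant component R_{theta phi theta phi} and the Gaussian curvature
R_low = sp.simplify(sum(g[0, a] * riemann(a, 1, 0, 1) for a in range(n)))
K = sp.simplify(R_low / (g[0, 0] * g[1, 1] - g[0, 1]**2))
print("R_{theta phi theta phi} =", R_low)   # expected: r**2 * sin(theta)**2
print("Gaussian curvature K =", K)          # expected: 1/r**2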
Symmetries and identities
The Riemann curvature tensor has the following symmetries and identities:
where the bracket refers to the inner product on the tangent space induced by the metric tensor and
the brackets and parentheses on the indices denote the antisymmetrization and symmetrization operators, respectively. If there is nonzero torsion, the Bianchi identities involve the torsion tensor.
The first (algebraic) Bianchi identity was discovered by Ricci, but is often called the first Bianchi identity or algebraic Bianchi identity, because it looks similar to the differential Bianchi identity.
The first three identities form a complete list of symmetries of the curvature tensor, i.e. given any tensor which satisfies the identities above, one can find a Riemannian manifold with such a curvature tensor at some point. Simple calculations show that such a tensor has n²(n² − 1)/12 independent components. Interchange symmetry follows from these. The algebraic symmetries are also equivalent to saying that R belongs to the image of the Young symmetrizer corresponding to the partition 2+2.
On a Riemannian manifold one has the covariant derivative and the Bianchi identity (often called the second Bianchi identity or differential Bianchi identity) takes the form of the last identity in the table.
Ricci curvature
The Ricci curvature tensor is the contraction of the first and third indices of the Riemann tensor.
Special cases
Surfaces
For a two-dimensional surface, the Bianchi identities imply that the Riemann tensor has only one independent component, which means that the Ricci scalar completely determines the Riemann tensor. There is only one valid expression for the Riemann tensor which fits the required symmetries:
and by contracting with the metric twice we find the explicit form:
where is the metric tensor and is a function called the Gaussian curvature and , , and take values either 1 or 2. The Riemann tensor has only one functionally independent component. The Gaussian curvature coincides with the sectional curvature of the surface. It is also exactly half the scalar curvature of the 2-manifold, while the Ricci curvature tensor of the surface is simply given by
Space forms
A Riemannian manifold is a space form if its sectional curvature is equal to a constant . The Riemann tensor of a space form is given by
Conversely, except in dimension 2, if the curvature of a Riemannian manifold has this form for some function , then the Bianchi identities imply that is constant and thus that the manifold is (locally) a space form.
See also
Introduction to the mathematics of general relativity
Decomposition of the Riemann curvature tensor
Curvature of Riemannian manifolds
Ricci curvature tensor
Theorems about circles
Citations
References
Bernhard Riemann
Curvature (mathematics)
Differential geometry
Riemannian geometry
Riemannian manifolds
Tensors in general relativity | Riemann curvature tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 2,027 | [
"Geometric measurement",
"Tensors",
"Physical quantities",
"Tensor physical quantities",
"Space (mathematics)",
"Metric spaces",
"Riemannian manifolds",
"Tensors in general relativity",
"Curvature (mathematics)"
] |
26,774,926 | https://en.wikipedia.org/wiki/Fritz%20John%20conditions | The Fritz John conditions (abbr. FJ conditions), in mathematics, are a necessary condition for a solution in nonlinear programming to be optimal. They are used as lemma in the proof of the Karush–Kuhn–Tucker conditions, but they are relevant on their own.
We consider the following optimization problem:
where ƒ is the function to be minimized, the inequality constraints and the equality constraints, and where, respectively, , and are the indices sets of inactive, active and equality constraints and is an optimal solution of , then there exists a non-zero vector such that:
if the and are linearly independent or, more generally, when a constraint qualification holds.
Named after Fritz John, these conditions are equivalent to the Karush–Kuhn–Tucker conditions in the case . When , the condition is equivalent to the violation of Mangasarian–Fromovitz constraint qualification (MFCQ). In other words, the Fritz John condition is equivalent to the optimality condition KKT or not-MFCQ.
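Because the displayed formulas did not survive in this copy, a restatement of the standard form of the conditions follows, in assumed notation (minimize f(x) subject to g_i(x) ≤ 0 for i = 1, ..., m and h_j(x) = 0 for j = 1, ..., p, with x* a local optimum):

\begin{aligned}
&\exists\,(\lambda_0,\lambda_1,\dots,\lambda_m,\mu_1,\dots,\mu_p)\neq 0,\quad \lambda_0\ge 0,\ \lambda_i\ge 0,\ \text{such that}\\
&\lambda_0\,\nabla f(x^*)+\sum_{i=1}^{m}\lambda_i\,\nabla g_i(x^*)+\sum_{j=1}^{p}\mu_j\,\nabla h_j(x^*)=0,
\qquad \lambda_i\,g_i(x^*)=0\ \ (i=1,\dots,m).
\end{aligned}

When λ0 can be taken strictly positive (for instance when the gradients of the active inequality constraints and of the equality constraints are linearly independent, or more generally under a constraint qualification), dividing through by λ0 recovers the Karush–Kuhn–Tucker conditions.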
References
Further reading
Mathematical optimization | Fritz John conditions | [
"Mathematics"
] | 218 | [
"Mathematical optimization",
"Mathematical analysis"
] |
23,944,162 | https://en.wikipedia.org/wiki/Press%E2%80%93Schechter%20formalism | The Press–Schechter formalism is a mathematical model for predicting the number of objects (such as galaxies, galaxy clusters or dark matter halos) of a certain mass within a given volume of the Universe. It was described in an academic paper by William H. Press and Paul Schechter in 1974.
Background
In the context of cold dark matter cosmological models, perturbations on all scales are imprinted on the universe at very early times, for example by quantum fluctuations during an inflationary era. Later, as radiation redshifts away, these become mass perturbations, and they start to grow linearly. Only long after that, starting with small mass scales and advancing over time to larger mass scales, do the perturbations actually collapse to form (for example) galaxies or clusters of galaxies, in so-called hierarchical structure formation (see Physical cosmology).
Press and Schechter observed that the fraction of mass in collapsed objects more massive than some mass M is related to the fraction of volume samples in which the smoothed initial density fluctuations are above some density threshold. This yields a formula for the mass function (distribution of masses) of objects at any given time.
Result
The Press–Schechter formalism predicts that the number of objects with mass between and is:
where is the index of the power spectrum of the fluctuations in the early universe , is the mean (baryonic and dark) matter density of the universe at the time the fluctuation from which the object was formed had gravitationally collapsed, and is a cut-off mass below which structures will form. Its value is:
is the standard deviation per unit volume of the fluctuation from which the object was formed had gravitationally collapsed, at the time of the gravitational collapse, and R is the scale of the universe at that time. Parameters with subscript 0 are at the time of the initial creation of the fluctuations (or any later time before the gravitational collapse).
Qualitatively, the prediction is that the mass distribution is a power law for small masses, with an exponential cutoff above some characteristic mass that increases with time. Such functions had previously been noted by Schechter as observed luminosity functions, and are now known as Schechter luminosity functions. The Press–Schechter formalism provided the first quantitative model for how such functions might arise.
The case of a scale-free power spectrum, n=0 (or, equivalently, a scalar spectral index of 1), is very close to the spectrum of the current standard cosmological model. In this case, has a simpler form. Written in mass-free units:
Assumptions and derivation sketch
The Press–Schechter formalism is derived through three key assumptions:
Matter in the Universe has perturbations following a Gaussian distribution and the variance of this distribution is scale-dependent, given by the power spectrum
Matter perturbations grow linearly with the growth function
Halos are spherical, virialized overdensities with a density above a critical density
In other words, fluctuations are small at some early cosmological time, and grow until they cross a threshold ending in gravitational collapse into a halo. These perturbations are modeled linearly, even though the eventual collapse is itself a non-linear process.
We introduce the smoothed density field, given by the density contrast averaged over a sphere with center x and mass M contained inside (i.e., the field is convolved with a top-hat window function). The sphere radius is of order (M/ρ̄)^(1/3). Then, if the smoothed contrast exceeds the critical density contrast, a halo exists at x with mass at least M.
Since perturbations are Gaussian distributed with an average 0 and variance we can directly compute the probability of halos forming with masses at least as
Implicitly, and depend on redshift, so the above probability does as well. The variance given in the 1974 paper is
where is the mass standard deviation in the volume of the fluctuation.
Note, that in the limit of large perturbations we expect all matter to be contained in halos such that However, the above equation gives us the limit One can make an ad-hoc argument and say that negative perturbations are not contributing in this scheme so that we are mistakenly leaving out half of the mass. And so, the Press-Schechter ansatz is
the fraction of matter contained in halos of mass
A fractional fluctuation ; at some cosmological time reaches gravitational collapse after the universe has expanded by a factor of 1/δ since that time. Using this, the normal distribution of the fluctuations, written in terms of the , , and gives the Press-Schechter formula.
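The collapsed fraction and the resulting mass function can be illustrated with a short sketch. It uses modern notation (collapse threshold delta_c of about 1.686 and a power-law sigma(M)), both of which are assumptions for illustration rather than the article's (n, M_c) parameterisation:

# Illustrative sketch (modern notation assumed): Press-Schechter collapsed
# fraction F(>M) = erfc(delta_c / (sqrt(2)*sigma(M))) and the mass function
# dn/dlnM obtained by differentiating it, for an assumed power-law sigma(M).
import math

DELTA_C = 1.686            # spherical-collapse threshold in linear theory
RHO_BAR = 1.0              # mean matter density, arbitrary units
M_STAR, GAMMA = 1.0, 0.5   # assumed sigma(M) = (M / M_STAR)**(-GAMMA)

def sigma(M):
    return (M / M_STAR) ** (-GAMMA)

def collapsed_fraction(M):
    return math.erfc(DELTA_C / (math.sqrt(2.0) * sigma(M)))

def dn_dlnM(M):
    # dn/dlnM = sqrt(2/pi) * (rho_bar/M) * (delta_c/sigma) * |dln sigma/dlnM|
    #           * exp(-delta_c**2 / (2 * sigma**2))
    nu = DELTA_C / sigma(M)
    return math.sqrt(2.0 / math.pi) * (RHO_BAR / M) * nu * GAMMA * math.exp(-nu**2 / 2.0)

if __name__ == "__main__":
    for M in (0.01, 0.1, 1.0, 10.0):
        print(f"M={M:6.2f}  F(>M)={collapsed_fraction(M):.3e}  dn/dlnM={dn_dlnM(M):.3e}")

The output shows the behaviour described above: a power law at small masses and an exponential cutoff above the characteristic mass.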
Generalizations
A number of generalizations of the Press–Schechter formula exist, such as the Sheth–Tormen approximation.
References
Astrophysics
Equations of astronomy
Mathematical modeling | Press–Schechter formalism | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,002 | [
"Mathematical modeling",
"Concepts in astronomy",
"Applied mathematics",
"Astrophysics",
"Equations of astronomy",
"Astronomical sub-disciplines"
] |
23,947,350 | https://en.wikipedia.org/wiki/Collapsin%20response%20mediator%20protein%20family | Collapsin response mediator protein family or CRMP family consists of five intracellular phosphoproteins (CRMP-1, CRMP-2, CRMP-3, CRMP-4, CRMP-5) of similar molecular size (60–66 kDa) and high (50–70%) amino acid sequence identity. CRMPs are predominantly expressed in the nervous system during development and play important roles in axon formation from neurites and in growth cone guidance and collapse through their interactions with microtubules. Cleaved forms of CRMPs have also been linked to neuron degeneration after trauma induced injury.
The modulation of CRMP-2 expression through various pharmaceuticals is a new and expanding area of research. By discovering chemicals that can either increase or decrease CRMP-2 expression, scientists can potentially reduce the effects of neurological diseases such as Alzheimer's disease and Parkinson's disease.
History
Members of the CRMP family were discovered independently in different species by several groups working in parallel. Among the five members of the family, CRMP-2 was the first to be identified, in 1995. A group of researchers led by Goshima found that CRMP-2 played a role in the transduction of the extracellular Semaphorin 3A (Sema3A), an inhibitory protein for axonal guidance in chick dorsal root ganglion (DRG). The protein was first named CRMP-62, after its relative molecular mass of 62 kDa, and was later referred to as CRMP-2. Concurrently, a 64 kDa protein named TOAD-64 (Turned On After Division) was shown to increase significantly during development of the cerebral cortex. The cDNA sequence of TOAD-64 corresponded to that of rat CRMP-2. In 1996, mouse CRMP-4, often referred to as Ulip (Unc-33 like phosphoprotein), was discovered by Byk and colleagues, using a rabbit polyclonal antiserum which recognized a 64 kDa mouse brain-specific phosphoprotein. In the same year, several other studies cloned CRMPs 1–4 in rat and dihydropyrimidinase (DHPase)-homologous sequences of CRMPs-1, -2, and -4 in human fetal brain. Finally, in 2000, CRMP-5 was discovered using two-hybrid screenings of brain libraries or purification from a protein complex. In subsequent research, CRMPs were studied as target antigens for autoantibodies in various autoimmune neurodegenerative disorders.
Structure
CRMP1-5 are between 564 and 572 amino acids and these proteins are found to be approximately 95% conserved between mouse and human. The protein sequence of CRMP1-4 is approximately 75% homologous with each other, while CRMP5 is only 50-51% homologous with each of the other CRMPs. Additionally, CRMPs are homologs of Unc-33 whose mutation causes impaired ability to form neural circuits and uncoordinated mobility in Caenorhabditis elegans. CRMP1-4 genes are roughly 60% homologous with the tetramer liver dihydropyrimidinase (DHPase), and also possess a similar structure to members of the metal-dependent amidohydrolases. However, the fact that CRMPs are not enzymatic reveals that they might lack the critical His residues that are present in amidohydrolase enzymes to allow them to bind metal atoms to their active site.
Additionally, CRMPs can exist as homotetramers or as heterotetramers. The tetramers are positioned so that the active residues on the N-terminal are located on the outside of the complex. This allows CRMP to regulate various factors in the cytoplasm. Gel filtration analysis has shown that CRMP-5 and CRMP-1 form weaker homo-tetramers compared with CRMP-2, and that divalent cations, Ca2+ and Mg2+, destabilize oligomers of CRMP-5 and CRMP-1, but promote CRMP-2 oligomerization. The C-terminus consists of 80 amino acids and is the site of phosphorylation for various kinases.
Expression
The expression of CRMPs is regulated throughout development of the nervous system. In general, CRMPs are highly expressed in post-mitotic nerve cells since early embryonic life. In the developing nervous system, each CRMP displays a distinct expression pattern both in time and space. For example, in the external granular layer (EGL), where mitosis of cerebellar granular neuron occurs, CRMP-2 is highly expressed while CRMP-5 is never expressed. However, CRMP-2 and CRMP-5 are found to be co-expressed in post-mitotic granular neurons. CRMP expression is highest when neurons and synaptic connections mature actively during the first postnatal week, suggesting CRMPs’ role in neuronal migration, differentiation and axonal growth. Indeed, CRMP-2 expression is induced by neuronal differentiation promoting factors such as noggin, chordin, GDNF, and FGF.
In the adult nervous system, CRMP expression is significantly downregulated and limited in areas associated with brain plasticity, neurogenesis, or regeneration. CRMP1 mRNA is mainly expressed in Purkinje cells of the cerebellum. Among the five members of the CRMP family, CRMP-2 is the most highly expressed in the adult brain, especially in post-mitotic neurons of the olfactory system, cerebellum, and hippocampus. CRMP-3 mRNA is only expressed in the granular layer of the cerebellum, inferior olive, and dentate gyrus of the hippocampus. CRMP-4 is the least expressed protein of CRMP family and its expression is restricted to the olfactory bulb, hippocampus, and the internal granule layer (IGL) of the cerebellum. Lastly, CRMP-5 is expressed not only in post-mitotic neurons of the olfactory bulb, olfactory epithelium, and dentate gyrus of the hippocampus, but also in peripheral nerve axons and sensory neurons. Other families of CRMP also appear in peripheral tissues. Expression of CRMPs-1, -4, and -5 in the adult testis is detected only in the cell spermatid stage and CRMP-2 mRNA is found in lung tissue of the fetal mouse and adult human.
CRMP expression can also be found in the death or survival signaling of postmitotic neurons. Although CRMP is a cytosolic protein, a significant amount of CRMP expression is detected as membrane-associated at the leading edge of the growth cone lamellipodium and filopodia. Also, injury-induced CRMP expression is found in sprouting fibers in both the central and peripheral nervous system. CRMP-4 expression is promoted upon ischemic injury and is associated with neurons having intact morphology, suggesting that CRMP-4 provides a survival signal and may be involved in regeneration of neurons. Similarly, CRMP-2 has been suggested to participate in the survival and maintenance of postmitotic neurons, as its over-expression accelerates nerve regeneration. However, CRMP-2 may also be involved in neuronal death, as its expression is upregulated during the early stages of dopamine-induced neuronal apoptosis in cerebellar granule neurons.
Mechanism, Function and Regulation
Axonal formation in developing neuron
CRMP-2 plays a role in neuronal polarity. Extensions of early neurons called lamellipodia form the early neurites. The neurites are indistinguishable between dendrites and the axon during this stage. One of these neurites eventually becomes the axon and grows longer than the dendritic neurites. CRMP-2 helps facilitate the rate of this axonal growth through its interactions with microtubules. CRMP-2 binds to and copolymerizes with tubulin heterodimers but does not bind as well to polymerized tubulin. This binding specificity promotes tubulin polymerization in vitro. CRMP-2/tubulin complexes are found in the distal part of the axon and modulate microtubule dynamics by controlling the rate of microtubule assembly. CRMP-2 also contributes to the establishment of neuronal polarity by regulating polarized Numb-mediated endocytosis at the axonal growth cones. In both cases, phosphorylation of CRMP-2 at Thr-555 by Rho kinase or at Thr-509, Thr-514 or Ser-518 by GSK-3β inactivates the protein by lowering binding affinity to tubulin and Numb.
Axonal growth cone guidance
In the developing nervous system, CRMPs’ involvement in axonal guidance has been proposed by localization of CRMPs in neurites and axonal growth cones. CRMPs participate in two distinct transduction pathways inducing axonal growth cone collapse. Both pathways involve Rho family GTPases, RhoA and Rac1, in their signaling cascade. Rho family GTPases regulate the cytoskeletal reorganization of the growth cone and affect the growth cone motility.
In the Sema3A signaling cascade, CRMP plays a role as an intracellular messenger mediating the repulsive signal. Sema3A initiates clustering of the receptors neuropilin 1 and plexin A1. While some other classes of semaphorins bind directly to plexin receptors, Sema3A does not bind to plexin directly. Instead, it interacts with neuropilins as a ligand-binding co-receptor for plexin and releases plexin-based signaling. The signal transduction pathway downstream of the activated plexin receptor is mediated by CRMPs. In response to the Sema3A signaling cascade, CRMPs, which exist as a heterotetramer in the cytosol, bind to the cytosolic domain of PlexA and its conformation changes. Further, CRMPs are phosphorylated by Cdk5, GSK3B, and Fes, a tyrosine protein kinase. In particular, phosphorylation of CRMP-1 and CRMP-2 is essential for Sema3A-regulated axonal guidance. In the presence of CRMP-2, the signal can induce alterations of the Rac-dependent pathway, which modulates actin filament assembly in the growth cone. In the absence of Sema3A, the interaction between the CRMP tetramer and PlexA is blocked. Phospholipase D2 (PLD-2), which is localized in the growth cone and is involved in actin cytoskeleton rearrangement, can be inhibited by CRMP-2; its inhibition results in actin depolymerization and possibly affects axonal growth cone collapse.
CRMP-2 is also involved in another growth cone collapse signal induced by extracellular lysophosphatidic acid (LPA). A signal through seven-transmembrane receptor activates an intracellular pathway, RhoA and the downstream of RhoA, Rho-kinase subsequently phosphorylates CRMP-2 on Threonine-555 (Thr555). In DRG neurons, CRMP-2 is phosphorylated by Rho kinase in LPA signaling but not in Sema3A signaling, revealing the presence of both Rho kinase-dependent and Rho kinase-independent pathways for the growth cone collapse. In RhoA pathway, CRMP-1 interacts with Rho-kinase and modulates RhoA signaling. CRMP-2 can be regulated post-translationally by O-GluNAc (β-N-acetylglucosamine linked to hydroxyls of serine or threonine) as the modification blocks CRMP-2 from being phosphorylated.
Trauma induced degeneration
Cleaved CRMP products play a considerable role in the degeneration of axons as a result of trauma inflicted on the central nervous system (CNS). As a result of trauma induced on the CNS, glutamate activates NMDA receptors leading to an influx of calcium that activates the calcium-dependent protease calpain. It has been shown that activated calpain proteolytically cleaves CRMP-3, creating a cleavage product of CRMP that interacts with vital cytosolic and nuclear molecules to bring about neurodegeneration. The structure of this cleaved form of CRMP has not been determined yet, making it difficult to understand the protein-protein interactions that occur and why these forms are able to initiate neurodegeneration after CNS injury. Additionally, calpain inhibitors (ALLN) are shown to have prevented the CRMP‐3 cleavage and therefore no axonal degeneration or neuronal death, further suggesting that calpain targets CRMP-3 for cleavage during glutamate-induced neuronal death. Ca2+/calmodulin-dependent protein kinase II (CaMK II) is also activated by calcium influx through NMDA receptors, and is another possible activator of CRMP-3. CRMP-3 is not the only CRMP involved in neuronal degeneration brought upon by trauma and cerebral ischemia, as all CRMPs are in fact targeted for cleavage to help promote degeneration.
List of CRMPs (and associated knockout phenotypes and derived functions)
Clinical significance
The expression of CRMPs is altered in neurodegenerative diseases and these proteins likely play an essential role in the pathogenesis of disorders in the nervous system, including Alzheimer's disease, Parkinson's disease, schizophrenia, and many others. One pharmaceutical that is relatively effective in targeting CRMP-2 to reduce the results of a neurodegenerative disease is lacosamide. Lacosamide is used in combination with other types of medications to control various types of seizures, especially epilepsy. One of the ways lacosamide does this is by modulating CRMP-2, thus inducing neuroprotective effects and decreasing the epileptic effects in people with epilepsy.
CRMP-2 phosphorylated at Thr-509, Ser-518, and Ser-522 has been connected to the degenerating neuritis in Alzheimer's disease. Studies suggest that glycogen synthase kinase-3β (GSK-3β) and cyclin-dependent protein kinase 5 (Cdk5) are highly expressed in Alzheimer's disease and are some of the protein kinases responsible for inactivating CRMP-2 in Alzheimer's disease. This inactivation of CRMP-2 in people with Alzheimer's disease promotes the expression of neurofibrillary tangles and plaque neurites which are consistent with people with this disease. CRMP-2 is also related to bipolar disorder and schizophrenia, likely as a result of the phosphorylation of CRMP-2 by GSK-3β.
References
Molecular neuroscience
Protein families | Collapsin response mediator protein family | [
"Chemistry",
"Biology"
] | 3,271 | [
"Protein families",
"Molecular neuroscience",
"Protein classification",
"Molecular biology"
] |
23,950,557 | https://en.wikipedia.org/wiki/Law%20of%20total%20covariance | In probability theory, the law of total covariance, covariance decomposition formula, or conditional covariance formula states that if X, Y, and Z are random variables on the same probability space, and the covariance of X and Y is finite, then
cov(X, Y) = E( cov(X, Y | Z) ) + cov( E(X | Z), E(Y | Z) ).
The nomenclature in this article's title parallels the phrase law of total variance. Some writers on probability call this the "conditional covariance formula" or use other names.
Note: The conditional expected values E( X | Z ) and E( Y | Z ) are random variables whose values depend on the value of Z. Note that the conditional expected value of X given the event Z = z is a function of z. If we write E( X | Z = z) = g(z) then the random variable E( X | Z ) is g(Z). Similar comments apply to the conditional covariance.
Proof
The law of total covariance can be proved using the law of total expectation: First,
cov(X, Y) = E(XY) − E(X)E(Y)
from a simple standard identity on covariances. Then we apply the law of total expectation by conditioning on the random variable Z:
cov(X, Y) = E( E(XY | Z) ) − E( E(X | Z) ) E( E(Y | Z) ).
Now we rewrite the term inside the first expectation using the definition of covariance:
cov(X, Y) = E( cov(X, Y | Z) + E(X | Z)E(Y | Z) ) − E( E(X | Z) ) E( E(Y | Z) ).
Since expectation of a sum is the sum of expectations, we can regroup the terms:
cov(X, Y) = E( cov(X, Y | Z) ) + [ E( E(X | Z)E(Y | Z) ) − E( E(X | Z) ) E( E(Y | Z) ) ].
Finally, we recognize the final two terms as the covariance of the conditional expectations E( X | Z ) and E( Y | Z ):
cov(X, Y) = E( cov(X, Y | Z) ) + cov( E(X | Z), E(Y | Z) ).
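As a quick sanity check, the identity can also be verified numerically. The sketch below (illustrative only; the variable names and the particular joint distribution are my own choices, not from the article) simulates a simple hierarchical model and compares both sides of the decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hierarchical model: Z is discrete, then (X, Y) depend on Z.
Z = rng.integers(0, 3, size=n)
X = Z + rng.normal(size=n)
Y = 2 * Z + 0.5 * X + rng.normal(size=n)

def cov(a, b):
    # Sample covariance E(ab) - E(a)E(b).
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# Left-hand side: total covariance.
lhs = cov(X, Y)

# Right-hand side: E[cov(X, Y | Z)] + cov(E[X | Z], E[Y | Z]).
p = np.array([np.mean(Z == z) for z in range(3)])
cond_cov = np.array([cov(X[Z == z], Y[Z == z]) for z in range(3)])
ex_given_z = np.array([X[Z == z].mean() for z in range(3)])
ey_given_z = np.array([Y[Z == z].mean() for z in range(3)])
cov_of_means = (np.sum(p * ex_given_z * ey_given_z)
                - np.sum(p * ex_given_z) * np.sum(p * ey_given_z))
rhs = np.sum(p * cond_cov) + cov_of_means

print(lhs, rhs)  # the two estimates agree up to Monte-Carlo error
```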
See also
Law of total variance, a special case corresponding to X = Y.
Law of total cumulance, of which the law of total covariance is a special case.
Notes and references
Algebra of random variables
Covariance and correlation
Articles containing proofs
Theory of probability distributions
Theorems in statistics
Statistical laws | Law of total covariance | [
"Mathematics"
] | 362 | [
"Mathematical problems",
"Articles containing proofs",
"Mathematical theorems",
"Theorems in statistics"
] |
23,951,474 | https://en.wikipedia.org/wiki/Hydroxystenozole | Hydroxystenozole (), also known as 17α-methylandrost-4-eno[3,2-c]pyrazol-17β-ol, is an orally active androgen/anabolic steroid (AAS) and a 17α-alkylated derivative of testosterone that was described in the literature in 1967 but was never marketed. It is closely related to stanozolol (17α-methyl-5α-androstano[3,2-c]pyrazol-17β-ol), differing from it only in hydrogenation (i.e., double bonds and their placement).
References
Abandoned drugs
1-Methylcyclopentanols
Anabolic–androgenic steroids
Androstanes
Pyrazoles | Hydroxystenozole | [
"Chemistry"
] | 169 | [
"Drug safety",
"Abandoned drugs"
] |
23,951,488 | https://en.wikipedia.org/wiki/Roxibolone | Roxibolone (INN) (developmental code name BR-906), also known as 11β,17β-dihydroxy-17α-methyl-3-oxoandrosta-1,4-diene-2-carboxylic acid, is a steroidal antiglucocorticoid described as an anticholesterolemic (cholesterol-lowering) and anabolic drug which was never marketed. Roxibolone is closely related to formebolone, which similarly shows antiglucocorticoid activity; with the exception of having a carboxaldehyde group at the C2 position instead of a carboxylic acid group, formebolone is structurally almost identical to roxibolone. The 2-decyl ester of roxibolone, decylroxibolone (developmental code name BR-917), is a long-acting prodrug of roxibolone with similar activity.
In rats, roxibolone counteracts the catabolic effects (control of nitrogen balance) and increased alkaline phosphatase levels induced by the potent glucocorticoid dexamethasone phosphate. It does not bind to the glucocorticoid receptor however, and its antiglucocorticoid activity may instead be mediated by enzyme inhibition. In accordance, 11α- and 11β-hydroxyprogesterone are known to be potent inhibitors of 11β-hydroxysteroid dehydrogenase (11β-HSD), which is responsible for the biosynthesis of the potent endogenous glucocorticoids cortisol and corticosterone (from the precursors deoxycortisol and deoxycorticosterone, respectively). As roxibolone is 11β-hydroxylated similarly, it may act in a likewise fashion. However, formebolone was found to be a very weak inhibitor of 11β-HSD type 2, although this specific isoenzyme is responsible for the inactivation of glucocorticoids rather than their production.
Unlike formebolone, which is additionally an anabolic-androgenic steroid (AAS), roxibolone is devoid of affinity for the androgen receptor and possesses no androgenic or myotrophic activity in animal assays. For this reason, it has been said that roxibolone may be much better tolerated in comparison.
References
Androstanes
Antiglucocorticoids
Carboxylic acids
Hypolipidemic agents
Ketones | Roxibolone | [
"Chemistry"
] | 554 | [
"Ketones",
"Carboxylic acids",
"Functional groups"
] |
23,951,520 | https://en.wikipedia.org/wiki/Tiomesterone | Tiomesterone (INN, JAN; thiomesterone (BAN); also known as 1α,7α-bis(acetylthio)-17α-methylandrost-4-en-17β-ol-3-one; developmental code StA 307; brand names Emdabol, Embadol, Emdabolin, and Protabol) is a synthetic, orally active anabolic-androgenic steroid (AAS) and a 17α-alkylated derivative of testosterone. It was described in 1963.
References
Anabolic–androgenic steroids
Androstanes
Hepatotoxins
Thioesters | Tiomesterone | [
"Chemistry"
] | 143 | [
"Thioesters",
"Functional groups"
] |
19,797,453 | https://en.wikipedia.org/wiki/Builders%20hardware | Builders' hardware or just builders hardware is a group of metal hardware specifically used for protection, decoration, and convenience in buildings. Builders' hardware does not form part of the building structure itself; rather, it supports building components and makes them work. It usually supports fixtures like windows, doors, and cabinets. Common examples include door handles, door hinges, deadbolts, latches, numerals, letter plates, switch plates, and door knockers.
Builders hardware is commonly available in brass, steel, aluminium, stainless steel, and iron.
Well-known suppliers of builders' hardware are based mainly in China, India, and Mexico, with some in the U.S.
Classifications
While builders' hardware is classified by its providing at least one of the three attributes listed above (protection, decoration, or convenience), it is usually broken down by where or how it is used.
Bathroom hardware
Bathroom hardware includes the products that are used in constructing and maintaining the bathroom appearance and decoration. Bathroom products includes faucets, showers, holders, tubs, shelves, mirrors etc.
Door hardware
All those products that are used either in door decoration, maintenance, or in any other function come under door hardware, such as door handles, fasteners, hinges, hooks, number plates, knockers, etc.
Furniture hardware
Furniture hardware are those products that are used to support the furniture look, design and durability. Furniture hardware products include furniture frames, furniture legs, furniture arms, etc.
Safety & security hardware
Buildings, goods and their occupants needs protection from fire, intruders, and other external agents. Proper protection systems include fire safe security system, home monitoring, smoke detectors, locksets, window guards, etc.
Plumbing hardware
Plumbing hardware products are used for supplying water throughout the building using hoses, pipes, and tubes. These hardware products ensure that water is supplied properly and continuously. Since water runs through or remains in these products at all times, the materials from which they are made must be highly corrosion resistant and able to withstand extreme temperatures. The most common materials are copper, aluminum, steel, and PVC.
Cabinet hardware
The products that are used to make cabinets functional come under cabinet hardware, such as cabinet fasteners, brackets, latches, hinges, pulls, and locks. Cabinet hardware consists of small components that make cabinets work. These products are made of materials such as plastics, metals, and sometimes glass.
Window hardware
Window hardware does not include window itself rather they are smaller components that are used to install, fix and protect windows, such as window extrusions, fasteners, handles, hinges, locks and many more.
Curtain hardware
Curtain hardware includes products like hooks, curtain rings, curtain finials, etc. These products are used to hang curtain at doors, windows, verandas, etc. Curtain hooks and poles are used to handle and move the curtains. Curtain hardware products are made of varieties of materials including metals and plastics. Mostly aluminum and iron are used for making rings, hooks, rods and poles.
See also
Architectural ironmongery
References
Hardware (mechanical) | Builders hardware | [
"Physics",
"Technology",
"Engineering"
] | 606 | [
"Physical systems",
"Machines",
"Hardware (mechanical)",
"Construction"
] |
19,798,519 | https://en.wikipedia.org/wiki/Space%20vector%20modulation | Space vector modulation (SVM) is an algorithm for the control of pulse-width modulation (PWM), invented by Gerhard Pfaff, Alois Weschta, and Albert Wick in 1982. It is used for the creation of alternating current (AC) waveforms; most commonly to drive 3 phase AC powered motors at varying speeds from DC using multiple class-D amplifiers. There are variations of SVM that result in different quality and computational requirements. One active area of development is in the reduction of total harmonic distortion (THD) created by the rapid switching inherent to these algorithms.
Principle
A three-phase inverter as shown to the right converts a DC supply, via a series of switches, to three output legs which could be connected to a three-phase motor.
The switches must be controlled so that at no time are both switches in the same leg turned on or else the DC supply would be shorted. This requirement may be met by the complementary operation of the switches within a leg. i.e. if A+ is on then A− is off and vice versa. This leads to eight possible switching vectors for the inverter, V0 through V7 with six active switching vectors and two zero vectors.
Note that looking down the columns for the active switching vectors V1-6, the output voltages vary as a pulsed sinusoid, with each leg offset by 120 degrees of phase angle.
To implement space vector modulation, a reference signal Vref is sampled with a frequency fs (Ts = 1/fs). The reference signal may be generated from three separate phase references using the αβγ transform. The reference vector is then synthesized using a combination of the two adjacent active switching vectors and one or both of the zero vectors. Various strategies of selecting the order of the vectors and which zero vector(s) to use exist. Strategy selection will affect the harmonic content and the switching losses.
More complicated SVM strategies for the unbalanced operation of four-leg three-phase inverters do exist. In these strategies the switching vectors define a 3D shape (a hexagonal prism in coordinates or a dodecahedron in abc coordinates) rather than a 2D hexagon. General SVM techniques are also available for converters with any number of legs and levels.
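To make the synthesis step concrete, the sketch below computes the sector and the dwell times of the two adjacent active vectors and of the zero vectors for one sampling period, using the standard sine-rule decomposition. It is an illustrative implementation only (function and variable names are my own), not code from any particular motor-control library:

```python
import math

def svm_dwell_times(v_ref, theta, v_dc, t_s):
    """Return (sector, t1, t2, t0): dwell times of the two adjacent
    active vectors and the total zero-vector time for one period t_s.

    v_ref : magnitude of the reference voltage vector
    theta : angle of the reference vector in radians (0 .. 2*pi)
    v_dc  : DC-link voltage
    t_s   : sampling (switching) period
    """
    # Modulation index relative to the hexagon geometry.
    m = math.sqrt(3) * v_ref / v_dc

    # Each of the six sectors spans 60 degrees.
    sector = int(theta // (math.pi / 3)) % 6
    alpha = theta - sector * (math.pi / 3)   # angle inside the sector

    # Sine-rule split of Vref onto the two adjacent active vectors.
    t1 = t_s * m * math.sin(math.pi / 3 - alpha)
    t2 = t_s * m * math.sin(alpha)
    t0 = t_s - t1 - t2                        # shared by the zero vectors
    return sector + 1, t1, t2, t0

# Example: 400 V DC link, 180 V reference at 100 degrees, 100 us period.
print(svm_dwell_times(180.0, math.radians(100), 400.0, 100e-6))
```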
See also
αβγ transform
Inverter (electrical)
pulse-width modulation
References
Electrical engineering
Control theory | Space vector modulation | [
"Mathematics",
"Engineering"
] | 488 | [
"Electrical engineering",
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
19,804,030 | https://en.wikipedia.org/wiki/Ethyl%20cellulose | Ethyl cellulose (or ethylcellulose) is a derivative of cellulose in which some of the hydroxyl groups on the repeating glucose units are converted into ethyl ether groups. The number of ethyl groups can vary depending on the manufacturer.
It is mainly used as a thin-film coating material for coating paper, vitamin and medical pills, and for thickeners in cosmetics and in industrial processes.
Food grade ethyl cellulose is one of few non-toxic films and thickeners which are not water-soluble. This property allows it to be used to safeguard ingredients from water.
Ethyl cellulose is also used as a food additive as an emulsifier (E462).
Ethyl cellulose is commonly used as a coating material for tablets and capsules, as it provides a protective barrier that prevents the active ingredients from being released too quickly in the digestive system. EC is also used as a binder, thickener, and stabilizer in a variety of food, cosmetic, and pharmaceutical products.
See also
Ethyl methyl cellulose
Methyl cellulose
References
Cellulose
Food additives
Excipients
Cellulose ethers
E-number additives | Ethyl cellulose | [
"Chemistry"
] | 252 | [
"Polymer stubs",
"Organic chemistry stubs"
] |
18,673,708 | https://en.wikipedia.org/wiki/Bass%E2%80%93Serre%20theory | Bass–Serre theory is a part of the mathematical subject of group theory that deals with analyzing the algebraic structure of groups acting by automorphisms on simplicial trees. The theory relates group actions on trees with decomposing groups as iterated applications of the operations of free product with amalgamation and HNN extension, via the notion of the fundamental group of a graph of groups. Bass–Serre theory can be regarded as one-dimensional version of the orbifold theory.
History
Bass–Serre theory was developed by Jean-Pierre Serre in the 1970s and formalized in Trees, Serre's 1977 monograph (developed in collaboration with Hyman Bass) on the subject. Serre's original motivation was to understand the structure of certain algebraic groups whose Bruhat–Tits buildings are trees. However, the theory quickly became a standard tool of geometric group theory and geometric topology, particularly the study of 3-manifolds. Subsequent work of Bass contributed substantially to the formalization and development of basic tools of the theory and currently the term "Bass–Serre theory" is widely used to describe the subject.
Mathematically, Bass–Serre theory builds on exploiting and generalizing the properties of two older group-theoretic constructions: free product with amalgamation and HNN extension. However, unlike the traditional algebraic study of these two constructions, Bass–Serre theory uses the geometric language of covering theory and fundamental groups. Graphs of groups, which are the basic objects of Bass–Serre theory, can be viewed as one-dimensional versions of orbifolds.
Apart from Serre's book, the basic treatment of Bass–Serre theory is available in the article of Bass, the article of G. Peter Scott and C. T. C. Wall and the books of Allen Hatcher, Gilbert Baumslag, Warren Dicks and Martin Dunwoody and Daniel E. Cohen.
Basic set-up
Graphs in the sense of Serre
Serre's formalism of graphs is slightly different from the standard formalism of graph theory. Here a graph A consists of a vertex set V, an edge set E, an edge reversal map E → E, e ↦ ē, such that ē ≠ e and the reverse of ē is again e for every e in E, and an initial vertex map o : E → V. Thus in A every edge e comes equipped with its formal inverse ē. The vertex o(e) is called the origin or the initial vertex of e and the vertex o(ē) is called the terminus of e and is denoted t(e). Both loop-edges (that is, edges e such that o(e) = t(e)) and multiple edges are allowed. An orientation on A is a partition of E into the union of two disjoint subsets E+ and E− so that for every edge e exactly one of the edges from the pair e, ē belongs to E+ and the other belongs to E−.
Graphs of groups
A graph of groups A consists of the following data:
A connected graph A;
An assignment of a vertex group Av to every vertex v of A.
An assignment of an edge group Ae to every edge e of A so that we have Ae = Aē for every e ∈ E.
Boundary monomorphisms αe : Ae → Ao(e) for all edges e of A, so that each αe is an injective group homomorphism.
For every e in E the map αē : Ae → At(e) is also denoted by ωe.
Fundamental group of a graph of groups
There are two equivalent definitions of the notion of the fundamental group of a graph of groups: the first is a direct algebraic definition via an explicit group presentation (as a certain iterated application of amalgamated free products and HNN extensions), and the second using the language of groupoids.
The algebraic definition is easier to state:
First, choose a spanning tree T in A. The fundamental group of A with respect to T, denoted π1(A, T), is defined as the quotient of the free product
(∗v∈V Av) ∗ F(E)
where F(E) is a free group with free basis E, subject to the following relations:
ē αe(g) e = ωe(g) for every e in E and every g in Ae. (The so-called Bass–Serre relation.)
e ē = 1 for every e in E.
e = 1 for every edge e of the spanning tree T.
There is also a notion of the fundamental group of A with respect to a base-vertex v in V, denoted π1(A, v), which is defined using the formalism of groupoids. It turns out that for every choice of a base-vertex v and every spanning tree T in A the groups π1(A, T) and π1(A, v) are naturally isomorphic.
The fundamental group of a graph of groups has a natural topological interpretation as well: it is the fundamental group of a graph of spaces whose vertex spaces and edge spaces have the fundamental groups of the vertex groups and edge groups, respectively, and whose gluing maps induce the homomorphisms of the edge groups into the vertex groups. One can therefore take this as a third definition of the fundamental group of a graph of groups.
Fundamental groups of graphs of groups as iterations of amalgamated products and HNN-extensions
The group G = π1(A, T) defined above admits an algebraic description in terms of iterated amalgamated free products and HNN extensions. First, form a group B as a quotient of the free product
subject to the relations
e−1αe(g)e = ωe(g) for every e in E+T and every g in Ae.
e = 1 for every e in E+T.
This presentation can be rewritten as
which shows that B is an iterated amalgamated free product of the vertex groups Av.
Then the group G = π1(A, T) has the presentation
which shows that G = π1(A, T) is a multiple HNN extension of B with stable letters {e : e ∈ E+(A − T)}, the positively oriented edges not in the spanning tree T.
Splittings
An isomorphism between a group G and the fundamental group of a graph of groups is called a splitting of G. If the edge groups in the splitting come from a particular class of groups (e.g. finite, cyclic, abelian, etc.), the splitting is said to be a splitting over that class. Thus a splitting where all edge groups are finite is called a splitting over finite groups.
Algebraically, a splitting of G with trivial edge groups corresponds to a free product decomposition
where F(X) is a free group with free basis X = E+(A−T) consisting of all positively oriented edges (with respect to some orientation on A) in the complement of some spanning tree T of A.
The normal forms theorem
Let g be an element of G = π1(A, T) represented as a product of the form
g = a0 e1 a1 e2 a2 ⋯ en an,
where e1, ..., en is a closed edge-path in A with the vertex sequence v0, v1, ..., vn = v0 (that is v0 = o(e1), vn = t(en) and vi = t(ei) = o(ei+1) for 0 < i < n) and where ai ∈ Avi for i = 0, ..., n.
Suppose that g = 1 in G. Then
either n = 0 and a0 = 1 in Av0,
or n > 0 and there is some 0 < i < n such that ei+1 = ēi and ai ∈ ωei(Aei).
The normal forms theorem immediately implies that the canonical homomorphisms Av → π1(A, T) are injective, so that we can think of the vertex groups Av as subgroups of G.
Higgins has given a nice version of the normal form using the fundamental groupoid of a graph of groups. This avoids choosing a base point or tree, and has been exploited by Moore.
Bass–Serre covering trees
To every graph of groups A, with a specified choice of a base-vertex, one can associate a Bass–Serre covering tree Ã, which is a tree that comes equipped with a natural group action of the fundamental group π1(A, v) without edge-inversions.
Moreover, the quotient graph Ã/π1(A, v) is isomorphic to A.
Similarly, if G is a group acting on a tree X without edge-inversions (that is, so that for every edge e of X and every g in G we have ge ≠ ē), one can define the natural notion of a quotient graph of groups A. The underlying graph A of A is the quotient graph X/G. The vertex groups of A are isomorphic to vertex stabilizers in G of vertices of X and the edge groups of A are isomorphic to edge stabilizers in G of edges of X.
Moreover, if X was the Bass–Serre covering tree of a graph of groups A and if G = π1(A, v) then the quotient graph of groups for the action of G on X can be chosen to be naturally isomorphic to A.
Fundamental theorem of Bass–Serre theory
Let G be a group acting on a tree X without inversions. Let A be the quotient graph of groups and let v be a base-vertex in A. Then G is isomorphic to the group π1(A, v) and there is an equivariant isomorphism between the tree X and the Bass–Serre covering tree . More precisely, there is a group isomorphism σ: G → π1(A, v) and a graph isomorphism such that for every g in G, for every vertex x of X and for every edge e of X we have j(gx) = g j(x) and j(ge) = g j(e).
This result is also known as the structure theorem.
One of the immediate consequences is the classic Kurosh subgroup theorem describing the algebraic structure of subgroups of free products.
Examples
Amalgamated free product
Consider a graph of groups A consisting of a single non-loop edge e (together with its formal inverse ē) with two distinct end-vertices u = o(e) and v = t(e), vertex groups H = Au, K = Av, an edge group C = Ae and the boundary monomorphisms α = αe : C → H and ω = ωe : C → K. Then T = A is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the amalgamated free product
G = H ∗C K.
In this case the Bass–Serre tree can be described as follows. The vertex set of X is the set of cosets
VX = {gH : g ∈ G} ∪ {gK : g ∈ G}.
Two vertices gK and fH are adjacent in X whenever there exists k ∈ K such that fH = gkH (or, equivalently, whenever there is h ∈ H such that gK = fhK).
The G-stabilizer of every vertex of X of type gK is equal to gKg−1 and the G-stabilizer of every vertex of X of type gH is equal to gHg−1. For an edge [gH, ghK] of X its G-stabilizer is equal to ghα(C)h−1g−1.
For every c ∈ C and h ∈ H the edges [gH, ghK] and [gH, ghα(c)K] are equal and the degree of the vertex gH in X is equal to the index [H:α(C)]. Similarly, every vertex of type gK has degree [K:ω(C)] in X.
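A classical illustration of this case (a standard textbook example, added here for concreteness rather than taken from the surrounding text) is the modular group:

```latex
\mathrm{SL}_2(\mathbb{Z}) \;\cong\; \mathbb{Z}/4\mathbb{Z} \,\ast_{\mathbb{Z}/2\mathbb{Z}}\, \mathbb{Z}/6\mathbb{Z}.
```

Here H ≅ Z/4, K ≅ Z/6 and C ≅ Z/2, so every vertex of type gH in the Bass–Serre tree has degree [H : α(C)] = 2 and every vertex of type gK has degree [K : ω(C)] = 3; the tree is the (2,3)-biregular tree.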
HNN extension
Let A be a graph of groups consisting of a single loop-edge e (together with its formal inverse ē), a single vertex v = o(e) = t(e), a vertex group B = Av, an edge group C = Ae and the boundary monomorphisms α = αe : C → B and ω = ωe : C → B. Then T = v is a spanning tree in A and the fundamental group π1(A, T) is isomorphic to the HNN extension
G = ⟨B, e | e−1 α(c) e = ω(c) for all c ∈ C⟩
with the base group B, stable letter e and the associated subgroups H = α(C), K = ω(C) in B. The composition ω ∘ α−1 : H → K is an isomorphism and the above HNN-extension presentation of G can be rewritten as
G = ⟨B, e | e−1 h e = (ω ∘ α−1)(h) for all h ∈ H⟩.
In this case the Bass–Serre tree can be described as follows. The vertex set of X is the set of cosets VX = {gB : g ∈ G}.
Two vertices gB and fB are adjacent in X whenever there exists b in B such that either fB = gbeB or fB = gbe−1B. The G-stabilizer of every vertex of X is conjugate to B in G and the stabilizer of every edge of X is conjugate to H in G. Every vertex of X has degree equal to [B : H] + [B : K].
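A concrete instance of this case (again a standard example, not drawn from the surrounding text) is given by the Baumslag–Solitar groups, which are HNN extensions of the infinite cyclic group:

```latex
BS(m,n) \;=\; \langle a, t \mid t\,a^{m}\,t^{-1} = a^{n} \rangle .
```

Here B = ⟨a⟩ ≅ Z and C ≅ Z, with associated subgroups H = α(C) = mZ and K = ω(C) = nZ, so every vertex of the Bass–Serre tree has degree [Z : mZ] + [Z : nZ] = m + n.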
A graph with the trivial graph of groups structure
Let A be a graph of groups with underlying graph A such that all the vertex and edge groups in A are trivial. Let v be a base-vertex in A. Then π1(A,v) is equal to the fundamental group π1(A,v) of the underlying graph A in the standard sense of algebraic topology and the Bass–Serre covering tree is equal to the standard universal covering space of A. Moreover, the action of π1(A,v) on is exactly the standard action of π1(A,v) on by deck transformations.
Basic facts and properties
If A is a graph of groups with a spanning tree T and if G = π1(A, T), then for every vertex v of A the canonical homomorphism from Av to G is injective.
If g ∈ G is an element of finite order then g is conjugate in G to an element of finite order in some vertex group Av.
If F ≤ G is a finite subgroup then F is conjugate in G to a subgroup of some vertex group Av.
If the graph A is finite and all vertex groups Av are finite then the group G is virtually free, that is, G contains a free subgroup of finite index.
If A is finite and all the vertex groups Av are finitely generated then G is finitely generated.
If A is finite and all the vertex groups Av are finitely presented and all the edge groups Ae are finitely generated then G is finitely presented.
Trivial and nontrivial actions
A graph of groups A is called trivial if A = T is already a tree and there is some vertex v of A such that Av = π1(A, A). This is equivalent to the condition that A is a tree and that for every edge e = [u, z] of A (with o(e) = u, t(e) = z) such that u is closer to v than z we have [Az : ωe(Ae)] = 1, that is Az = ωe(Ae).
An action of a group G on a tree X without edge-inversions is called trivial if there exists a vertex x of X that is fixed by G, that is such that Gx = x. It is known that an action of G on X is trivial if and only if the quotient graph of groups for that action is trivial.
Typically, only nontrivial actions on trees are studied in Bass–Serre theory since trivial graphs of groups do not carry any interesting algebraic information, although trivial actions in the above sense (e. g. actions of groups by automorphisms on rooted trees) may also be interesting for other mathematical reasons.
One of the classic and still important results of the theory is a theorem of Stallings about ends of groups. The theorem states that a finitely generated group has more than one end if and only if this group admits a nontrivial splitting over finite subgroups that is, if and only if the group admits a nontrivial action without inversions on a tree with finite edge stabilizers.
An important general result of the theory states that if G is a group with Kazhdan's property (T) then G does not admit any nontrivial splitting, that is, that any action of G on a tree X without edge-inversions has a global fixed vertex.
Hyperbolic length functions
Let G be a group acting on a tree X without edge-inversions.
For every g ∈ G put
ℓX(g) = min{ d(x, gx) : x ∈ VX },
where d is the path-metric on X. Then ℓX(g) is called the translation length of g on X.
The function
ℓX : G → Z, g ↦ ℓX(g),
is called the hyperbolic length function or the translation length function for the action of G on X.
Basic facts regarding hyperbolic length functions
For g ∈ G exactly one of the following holds:
(a) ℓX(g) = 0 and g fixes a vertex of G. In this case g is called an elliptic element of G.
(b) ℓX(g) > 0 and there is a unique bi-infinite embedded line in X, called the axis of g and denoted Lg which is g-invariant. In this case g acts on Lg by translation of magnitude ℓX(g) and the element g ∈ G is called hyperbolic.
If ℓX(G) ≠ 0 then there exists a unique minimal G-invariant subtree XG of X. Moreover, XG is equal to the union of axes of hyperbolic elements of G.
The length-function ℓX : G → Z is said to be abelian if it is a group homomorphism from G to Z and non-abelian otherwise. Similarly, the action of G on X is said to be abelian if the associated hyperbolic length function is abelian and is said to be non-abelian otherwise.
In general, an action of G on a tree X without edge-inversions is said to be minimal if there are no proper G-invariant subtrees in X.
An important fact in the theory says that minimal non-abelian tree actions are uniquely determined by their hyperbolic length functions:
Uniqueness theorem
Let G be a group with two nonabelian minimal actions without edge-inversions on trees X and Y. Suppose that the hyperbolic length functions ℓX and ℓY on G are equal, that is ℓX(g) = ℓY(g) for every g ∈ G. Then the actions of G on X and Y are equal in the sense that there exists a graph isomorphism f : X → Y which is G-equivariant, that is f(gx) = g f(x) for every g ∈ G and every x ∈ VX.
Important developments in Bass–Serre theory
Important developments in Bass–Serre theory in the last 30 years include:
Various accessibility results for finitely presented groups that bound the complexity (that is, the number of edges) in a graph of groups decomposition of a finitely presented group, where some algebraic or geometric restrictions on the types of groups considered are imposed. These results include:
Dunwoody's theorem about accessibility of finitely presented groups stating that for any finitely presented group G there exists a bound on the complexity of splittings of G over finite subgroups (the splittings are required to satisfy a technical assumption of being "reduced");
Bestvina–Feighn generalized accessibility theorem stating that for any finitely presented group G there is a bound on the complexity of reduced splittings of G over small subgroups (the class of small groups includes, in particular, all groups that do not contain non-abelian free subgroups);
Acylindrical accessibility results for finitely presented (Sela, Delzant) and finitely generated (Weidmann) groups which bound the complexity of the so-called acylindrical splittings, that is splittings where for their Bass–Serre covering trees the diameters of fixed subsets of nontrivial elements of G are uniformly bounded.
The theory of JSJ-decompositions for finitely presented groups. This theory was motivated by the classic notion of JSJ decomposition in 3-manifold topology and was initiated, in the context of word-hyperbolic groups, by the work of Sela. JSJ decompositions are splittings of finitely presented groups over some classes of small subgroups (cyclic, abelian, noetherian, etc., depending on the version of the theory) that provide a canonical descriptions, in terms of some standard moves, of all splittings of the group over subgroups of the class. There are a number of versions of JSJ-decomposition theories:
The initial version of Sela for cyclic splittings of torsion-free word-hyperbolic groups.
Bowditch's version of JSJ theory for word-hyperbolic groups (with possible torsion) encoding their splittings over virtually cyclic subgroups.
The version of Rips and Sela of JSJ decompositions of torsion-free finitely presented groups encoding their splittings over free abelian subgroups.
The version of Dunwoody and Sageev of JSJ decompositions of finitely presented groups over noetherian subgroups.
The version of Fujiwara and Papasoglu, also of JSJ decompositions of finitely presented groups over noetherian subgroups.
A version of JSJ decomposition theory for finitely presented groups developed by Scott and Swarup.
The theory of lattices in automorphism groups of trees. The theory of tree lattices was developed by Bass, Kulkarni and Lubotzky by analogy with the theory of lattices in Lie groups (that is, discrete subgroups of Lie groups of finite co-volume). For a discrete subgroup G of the automorphism group of a locally finite tree X one can define a natural notion of volume for the quotient graph of groups A as
vol(A) = Σv∈V(A) 1/|Av|.
The group G is called an X-lattice if vol(A)< ∞. The theory of tree lattices turns out to be useful in the study of discrete subgroups of algebraic groups over non-archimedean local fields and in the study of Kac–Moody groups.
Development of foldings and Nielsen methods for approximating group actions on trees and analyzing their subgroup structure.
The theory of ends and relative ends of groups, particularly various generalizations of Stallings theorem about groups with more than one end.
Quasi-isometric rigidity results for groups acting on trees.
Generalizations
There have been several generalizations of Bass–Serre theory:
The theory of complexes of groups (see Haefliger, Corson Bridson-Haefliger) provides a higher-dimensional generalization of Bass–Serre theory. The notion of a graph of groups is replaced by that of a complex of groups, where groups are assigned to each cell in a simplicial complex, together with monomorphisms between these groups corresponding to face inclusions (these monomorphisms are required to satisfy certain compatibility conditions). One can then define an analog of the fundamental group of a graph of groups for a complex of groups. However, in order for this notion to have good algebraic properties (such as embeddability of the vertex groups in it) and in order for a good analog for the notion of the Bass–Serre covering tree to exist in this context, one needs to require some sort of "non-positive curvature" condition for the complex of groups in question (see, for example ).
The theory of isometric group actions on real trees (or R-trees) which are metric spaces generalizing the graph-theoretic notion of a tree (graph theory). The theory was developed largely in the 1990s, where the Rips machine of Eliyahu Rips on the structure theory of stable group actions on R-trees played a key role (see Bestvina-Feighn). This structure theory assigns to a stable isometric action of a finitely generated group G a certain "normal form" approximation of that action by a stable action of G on a simplicial tree and hence a splitting of G in the sense of Bass–Serre theory. Group actions on real trees arise naturally in several contexts in geometric topology: for example as boundary points of the Teichmüller space (every point in the Thurston boundary of the Teichmüller space is represented by a measured geodesic lamination on the surface; this lamination lifts to the universal cover of the surface and a naturally dual object to that lift is an R-tree endowed with an isometric action of the fundamental group of the surface), as Gromov-Hausdorff limits of, appropriately rescaled, Kleinian group actions, and so on. The use of R-trees machinery provides substantial shortcuts in modern proofs of Thurston's Hyperbolization Theorem for Haken 3-manifolds. Similarly, R-trees play a key role in the study of Culler-Vogtmann's Outer space as well as in other areas of geometric group theory; for example, asymptotic cones of groups often have a tree-like structure and give rise to group actions on real trees. The use of R-trees, together with Bass–Serre theory, is a key tool in the work of Sela on solving the isomorphism problem for (torsion-free) word-hyperbolic groups, Sela's version of the JSJ-decomposition theory and the work of Sela on the Tarski Conjecture for free groups and the theory of limit groups.
The theory of group actions on Λ-trees, where Λ is an ordered abelian group (such as R or Z) provides a further generalization of both the Bass–Serre theory and the theory of group actions on R-trees (see Morgan, Alperin-Bass, Chiswell).
See also
Geometric group theory
References
Group theory
Geometric group theory | Bass–Serre theory | [
"Physics",
"Mathematics"
] | 5,265 | [
"Geometric group theory",
"Group actions",
"Group theory",
"Fields of abstract algebra",
"Symmetry"
] |
18,674,239 | https://en.wikipedia.org/wiki/Weemote | The Weemote is a television remote control made by Fobis Technologies that was designed for young children.
Design
The Weemote was designed for younger children to limit their ability to surf television channels, and also to partially serve as a learning tool. The remote looks like a toy with buttons that are different colors and specific shapes. Each button can be programmed to a specific television channel. There are several variants of the product, Weemote 2, an updated version, and Weemote Sr., intended for the elderly.
Trademark violation claims against Nintendo
The term "Weemote" was originally trademarked in 2000 by Fobis Technologies. While spelled differently, the term "Weemote" is phonetically identical to "Wiimote", the unofficial term for the Wii Remote, Nintendo's controller for the Wii which debuted six years later in 2006. Fobis Technologies claims this to be trademark infringement, however Nintendo does not actually use the term "Wiimote" in official promotional materials; many retailers that sell the Wii Remote do use the term. Fobis sent out up to 100 cease and desist letters to retailers and have made offers to Nintendo for them to purchase the trademark. Nintendo declined the offer, stating that it "does not use and does not plan to use the Weemote trademark".
References
Television technology
Remote control | Weemote | [
"Technology"
] | 277 | [
"Information and communications technology",
"Television technology"
] |
18,675,609 | https://en.wikipedia.org/wiki/Light-emitting%20electrochemical%20cell | A light-emitting electrochemical cell (LEC or LEEC) is a solid-state device that generates light from an electric current (electroluminescence). LECs are usually composed of two metal electrodes connected by (e.g. sandwiching) an organic semiconductor containing mobile ions. Aside from the mobile ions, their structure is very similar to that of an organic light-emitting diode (OLED).
LECs have most of the advantages of OLEDs, as well as additional ones:
The device is less dependent on the difference in work function of the electrodes. Consequently, the electrodes can be made of the same material (e.g. gold). Similarly, the device can still be operated at low voltages.
Recently developed materials such as graphene or a blend of carbon nanotubes and polymers have been used as electrodes, eliminating the need for using indium tin oxide for a transparent electrode.
The thickness of the active electroluminescent layer is not critical for the device to operate. This means that:
LECs can be printed with relatively inexpensive printing processes (where control over film thicknesses can be difficult).
In a planar device configuration, internal device operation can be observed directly.
There are two distinct types of LECs: those based on ionic transition metal complexes (iTMCs) and those based on light-emitting polymers (LEPs). iTMC devices are often more efficient than their LEP-based counterparts because the emission mechanism is phosphorescent rather than fluorescent.
While electroluminescence had been seen previously in similar devices, the invention of the polymer LEC is attributed to Pei et al. Since then, numerous research groups and a few companies have worked on improving and commercializing the devices.
In 2012 the first inherently stretchable LEC using an elastomeric emissive material (at room temperature) was reported. Dispersing an ionic transition metal complex into an elastomeric matrix enables the fabrication of intrinsically stretchable light-emitting devices that possess large emission areas (~175 mm2) and tolerate linear strains up to 27% and repetitive cycles of 15% strain. This work demonstrates the suitability of this approach to new applications in conformable lighting that require uniform, diffuse light emission over large areas.
In 2012 fabrication of organic light-emitting electrochemical cells (LECs) using a roll-to-roll compatible process under ambient conditions was reported.
In 2017, a new design approach developed by a team of Swedish researchers promised to deliver substantially higher efficiency: 99.2 cd A−1 at a bright luminance of 1910 cd m−2.
See also
Electrochemical cell
Electrochemiluminescence
Light-emitting diode
Organic light-emitting diode
Photoelectrolysis
References
Display technology
Molecular electronics
Conductive polymers | Light-emitting electrochemical cell | [
"Chemistry",
"Materials_science",
"Engineering"
] | 573 | [
"Molecular physics",
"Molecular electronics",
"Electronic engineering",
"Display technology",
"Nanotechnology",
"Conductive polymers"
] |
18,679,245 | https://en.wikipedia.org/wiki/Unpaired%20electron | In chemistry, an unpaired electron is an electron that occupies an orbital of an atom singly, rather than as part of an electron pair. Each atomic orbital of an atom (specified by the three quantum numbers n, l and m) has a capacity to contain two electrons (electron pair) with opposite spins. As the formation of electron pairs is often energetically favourable, either in the form of a chemical bond or as a lone pair, unpaired electrons are relatively uncommon in chemistry, because an entity that carries an unpaired electron is usually rather reactive. In organic chemistry they typically only occur briefly during a reaction on an entity called a radical; however, they play an important role in explaining reaction pathways.
Radicals are uncommon in s- and p-block chemistry, since the unpaired electron occupies a valence p orbital or an sp, sp2 or sp3 hybrid orbital. These orbitals are strongly directional and therefore overlap to form strong covalent bonds, favouring dimerisation of radicals. Radicals can be stable if dimerisation would result in a weak bond or the unpaired electrons are stabilised by delocalisation. In contrast, radicals in d- and f-block chemistry are very common. The less directional, more diffuse d and f orbitals, in which unpaired electrons reside, overlap less effectively, form weaker bonds and thus dimerisation is generally disfavoured. These d and f orbitals also have comparatively smaller radial extension, disfavouring overlap to form dimers.
Relatively more stable entities with unpaired electrons do exist, e.g. the nitric oxide molecule has one. According to Hund's rule, the spins of unpaired electrons are aligned parallel and this gives these molecules paramagnetic properties.
The most stable examples of unpaired electrons are found on the atoms and ions of lanthanides and actinides. The incomplete f-shell of these entities does not interact very strongly with the environment they are in and this prevents them from being paired. The ions with the largest number of unpaired electrons are Gd3+ and Cm3+ with seven unpaired electrons.
An unpaired electron has a magnetic dipole moment, while an electron pair has no dipole moment because the two electrons have opposite spins so their magnetic dipole fields are in opposite directions and cancel. Thus an atom with unpaired electrons acts as a magnetic dipole and interacts with a magnetic field. Only elements with unpaired electrons exhibit paramagnetism, ferromagnetism, and antiferromagnetism.
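The strength of this paramagnetism is often estimated with the spin-only formula (a standard approximation added here for illustration; it neglects orbital contributions, which is a poor assumption for many lanthanide ions other than Gd3+):

```latex
\mu_{\text{eff}} \;\approx\; \sqrt{n(n+2)}\;\mu_{\mathrm{B}},
\qquad
n = 7 \;(\mathrm{Gd}^{3+}) \;\Rightarrow\;
\mu_{\text{eff}} \approx \sqrt{63}\,\mu_{\mathrm{B}} \approx 7.9\,\mu_{\mathrm{B}},
```

where n is the number of unpaired electrons and μB is the Bohr magneton.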
References
Quantum chemistry
Chemical bonding | Unpaired electron | [
"Physics",
"Chemistry",
"Materials_science"
] | 540 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
"Physical chemistry stubs",
" and optical physics"
] |
18,680,489 | https://en.wikipedia.org/wiki/45%20Aquilae | 45 Aquilae, abbreviated 45 Aql, is a triple star system in the equatorial constellation of Aquila. 45 Aquilae is its Flamsteed designation. It is located away from Earth, give or take a 6 light-year margin of error, and has a combined apparent visual magnitude of 5.7. The system is moving closer to the Earth with a heliocentric radial velocity of -46 km/s.
Based upon a stellar classification of A3 IV, the primary component of this system is a subgiant star that is in the process of evolving away from the main sequence. The star has 2.6 times the mass of the Sun and is spinning with a projected rotational velocity of 75 km/s. It has an orbiting companion with a period of 20.31 years and an eccentricity of 0.054. At an angular separation of 42.2 arcseconds from this pair is a 12.7 magnitude tertiary companion.
References
External links
HR 7480
CCDM 19407-0037
Image 45 Aquilae
A-type subgiants
Triple star systems
Aquila (constellation)
Durchmusterung objects
Aquilae, 45
185762
096807
7480 | 45 Aquilae | [
"Astronomy"
] | 253 | [
"Aquila (constellation)",
"Constellations"
] |
27,124,959 | https://en.wikipedia.org/wiki/Sakuma%E2%80%93Hattori%20equation | In physics, the Sakuma–Hattori equation is a mathematical model for predicting the amount of thermal radiation, radiometric flux or radiometric power emitted from a perfect blackbody or received by a thermal radiation detector.
History
The Sakuma–Hattori equation was first proposed by Fumihiro Sakuma, Akira Ono and Susumu Hattori in 1982. In 1996, a study investigated the usefulness of various forms of the Sakuma–Hattori equation. This study showed the Planckian form to provide the best fit for most applications. This study was done for 10 different forms of the Sakuma–Hattori equation containing not more than three fitting variables. In 2008, BIPM CCT-WG5 recommended its use for radiation thermometry measurement uncertainty budgets below 960 °C.
General form
The Sakuma–Hattori equation gives the electromagnetic signal from thermal radiation based on an object's temperature. The signal can be electromagnetic flux or the signal produced by a detector measuring this radiation. It has been suggested that below the silver point, a method using the Sakuma–Hattori equation be used. In its general form it looks like
S(T) = C / (exp(c2 / (λx T)) − 1)
where:
C is the scalar coefficient
c2 is the second radiation constant (0.014387752 m⋅K)
λx is the temperature-dependent effective wavelength (in meters)
T is the absolute temperature (in K)
Planckian form
Derivation
The Planckian form is realized by the following substitution:

\lambda_x = A + \frac{B}{T}

Making this substitution yields the Sakuma–Hattori equation in the Planckian form.

Sakuma–Hattori equation (Planckian form)

S(T) = \frac{C}{\exp\left(\frac{c_2}{A T + B}\right) - 1}
Inverse equation

T = \frac{c_2}{A \ln\left(\frac{C}{S} + 1\right)} - \frac{B}{A}

First derivative

\frac{dS}{dT} = S(T)\,\frac{c_2 A}{(A T + B)^2}\,\frac{\exp\left(\frac{c_2}{A T + B}\right)}{\exp\left(\frac{c_2}{A T + B}\right) - 1}
Discussion
The Planckian form is recommended for use in calculating uncertainty budgets for radiation thermometry and infrared thermometry. It is also recommended for use in calibration of radiation thermometers below the silver point.
The Planckian form resembles Planck's law.
However, the Sakuma–Hattori equation becomes very useful when considering low-temperature, wide-band radiation thermometry. To use Planck's law over a wide spectral band, an integral like the following would have to be considered:

S(T) = \int_{\lambda_1}^{\lambda_2} \frac{c_1}{\lambda^5 \left[\exp\left(\frac{c_2}{\lambda T}\right) - 1\right]} \, d\lambda
This integral yields an incomplete polylogarithm function, which can make its use very cumbersome.
The standard numerical treatment expands the incomplete integral in a geometric series of the exponential,

\frac{1}{\exp\left(\frac{c_2}{\lambda T}\right) - 1} = \sum_{n=1}^{\infty} \exp\left(-\frac{n c_2}{\lambda T}\right),

after substituting x = \frac{c_2}{\lambda T}. Truncating the sum at some order then provides an approximation.
The Sakuma–Hattori equation shown above was found to provide the best curve-fit for interpolation of scales for radiation thermometers among a number of alternatives investigated.
The inverse Sakuma–Hattori function can be used without iterative calculation. This is an additional advantage over integration of Planck's law.
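The closed-form inverse lends itself to a short numerical illustration. The following Python sketch assumes the Planckian form given above; the fit parameters C, A, and B are hypothetical placeholders rather than values from any real calibration.

```python
import math

C2 = 0.014387752  # second radiation constant, m*K (value quoted above)

def sakuma_hattori(T, C, A, B):
    """Planckian-form Sakuma-Hattori signal S(T); lambda_x = A + B/T."""
    return C / math.expm1(C2 / (A * T + B))

def inverse_sakuma_hattori(S, C, A, B):
    """Closed-form inverse: temperature from signal, no iteration needed."""
    return C2 / (A * math.log(C / S + 1.0)) - B / A

# Round-trip check with hypothetical fit parameters:
C, A, B = 1.0e5, 9.0e-7, 1.0e-10
T = 500.0  # kelvin
S = sakuma_hattori(T, C, A, B)
print(round(inverse_sakuma_hattori(S, C, A, B), 6))  # ~500.0
```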
Other forms
The 1996 paper investigated 10 different forms. They are listed in the chart below in order of quality of curve-fit to actual radiometric data.
See also
Stefan–Boltzmann law
Planck's law
Rayleigh–Jeans law
Wien approximation
Wien's displacement law
Kirchhoff's law of thermal radiation
Infrared thermometer
Pyrometer
Thin-filament pyrometry
Thermography
Black body
Thermal radiation
Radiance
Emissivity
ASTM Subcommittee E20.02 on Radiation Thermometry
Notes
References
Statistical mechanics
Equations
1982 in science | Sakuma–Hattori equation | [
"Physics",
"Mathematics"
] | 677 | [
"Statistical mechanics",
"Mathematical objects",
"Equations"
] |
27,126,857 | https://en.wikipedia.org/wiki/Norm%20residue%20isomorphism%20theorem | In mathematics, the norm residue isomorphism theorem is a long-sought result relating Milnor K-theory and Galois cohomology. The result has a relatively elementary formulation and at the same time represents the key juncture in the proofs of many seemingly unrelated theorems from abstract algebra, theory of quadratic forms, algebraic K-theory and the theory of motives. The theorem asserts that a certain statement holds true for any prime and any natural number . John Milnor speculated that this theorem might be true for and all , and this question became known as Milnor's conjecture. The general case was conjectured by Spencer Bloch and Kazuya Kato and became known as the Bloch–Kato conjecture or the motivic Bloch–Kato conjecture to distinguish it from the Bloch–Kato conjecture on values of L-functions. The norm residue isomorphism theorem was proved by Vladimir Voevodsky using a number of highly innovative results of Markus Rost.
Statement
For any integer ℓ invertible in a field k there is a map

\partial : k^{\times} \to H^1_{\text{ét}}(k, \mu_\ell)

where \mu_\ell denotes the Galois module of ℓ-th roots of unity in some separable closure of k. It induces an isomorphism k^{\times}/(k^{\times})^{\ell} \cong H^1_{\text{ét}}(k, \mu_\ell). The first hint that this is related to K-theory is that k^{\times} is the group K1(k). Taking the tensor products and applying the multiplicativity of étale cohomology yields an extension of the map to maps:

\partial^n : (k^{\times})^{\otimes n} \to H^n_{\text{ét}}(k, \mu_\ell^{\otimes n}).

These maps have the property that, for every element a in k \setminus \{0, 1\}, the element \partial^2(a \otimes (1 - a)) vanishes. This is the defining relation of Milnor K-theory. Specifically, Milnor K-theory is defined to be the graded parts of the ring:

K^M_{*}(k) = T(k^{\times}) / \left( a \otimes (1 - a) \right),

where T(k^{\times}) is the tensor algebra of the multiplicative group k^{\times} and the quotient is by the two-sided ideal generated by all elements of the form a \otimes (1 - a). Therefore the map factors through a map:

\partial^n : K^M_n(k) \to H^n_{\text{ét}}(k, \mu_\ell^{\otimes n}).

This map is called the Galois symbol or norm residue map. Because étale cohomology with mod-ℓ coefficients is an ℓ-torsion group, this map additionally factors through K^M_n(k)/\ell.
The norm residue isomorphism theorem (or Bloch–Kato conjecture) states that for a field k and an integer ℓ that is invertible in k, the norm residue map

\partial^n : K^M_n(k)/\ell \to H^n_{\text{ét}}(k, \mu_\ell^{\otimes n})

from Milnor K-theory mod-ℓ to étale cohomology is an isomorphism. The case ℓ = 2 is the Milnor conjecture, and the case n = 2 is the Merkurjev–Suslin theorem.
History
The étale cohomology of a field is identical to Galois cohomology, so the conjecture equates the ℓth cotorsion (the quotient by the subgroup of ℓ-divisible elements) of the Milnor K-group of a field k with the Galois cohomology of k with coefficients in the Galois module of ℓth roots of unity. The point of the conjecture is that there are properties that are easily seen for Milnor K-groups but not for Galois cohomology, and vice versa; the norm residue isomorphism theorem makes it possible to apply techniques applicable to the object on one side of the isomorphism to the object on the other side of the isomorphism.
The case when n is 0 is trivial, and the case when n = 1 follows easily from Hilbert's Theorem 90. The case n = 2 and ℓ = 2 was proved by Merkurjev. An important advance was the case n = 2 and ℓ arbitrary. This case was proved by Merkurjev and Suslin and is known as the Merkurjev–Suslin theorem. Later, Merkurjev and Suslin, and independently, Rost, proved the case n = 3 and ℓ = 2.
The name "norm residue" originally referred to the Hilbert symbol , which takes values in the Brauer group of k (when the field contains all ℓ-th roots of unity). Its usage here is in analogy with standard local class field theory and is expected to be part of an (as yet undeveloped) "higher" class field theory.
The norm residue isomorphism theorem implies the Quillen–Lichtenbaum conjecture. It is equivalent to a theorem whose statement was once referred to as the Beilinson–Lichtenbaum conjecture.
History of the proof
Milnor's conjecture was proved by Vladimir Voevodsky.
Later Voevodsky proved the general Bloch–Kato conjecture.
The starting point for the proof is a series of conjectures due to and . They conjectured the existence of motivic complexes, complexes of sheaves whose cohomology was related to motivic cohomology. Among the conjectural properties of these complexes were three properties: one connecting their Zariski cohomology to Milnor's K-theory, one connecting their etale cohomology to cohomology with coefficients in the sheaves of roots of unity and one connecting their Zariski cohomology to their etale cohomology. These three properties implied, as a very special case, that the norm residue map should be an isomorphism. The essential characteristic of the proof is that it uses the induction on the "weight" (which equals the dimension of the cohomology group in the conjecture) where the inductive step requires knowing not only the statement of Bloch-Kato conjecture but the much more general statement that contains a large part of the Beilinson-Lichtenbaum conjectures. It often occurs in proofs by induction that the statement being proved has to be strengthened in order to prove the inductive step. In this case the strengthening that was needed required the development of a very large amount of new mathematics.
The earliest proof of Milnor's conjecture is contained in a 1995 preprint of Voevodsky and is inspired by the idea that there should be algebraic analogs of Morava K-theory (these algebraic Morava K-theories were later constructed by Simone Borghesi). In a 1996 preprint, Voevodsky was able to remove Morava K-theory from the picture by introducing instead algebraic cobordisms and using some of their properties that were not proved at that time (these properties were proved later). The constructions of 1995 and 1996 preprints are now known to be correct but the first completed proof of Milnor's conjecture used a somewhat different scheme.
It is also the scheme that the proof of the full Bloch–Kato conjecture follows. It was devised by Voevodsky a few months after the 1996 preprint appeared. Implementing this scheme required making substantial advances in the field of motivic homotopy theory as well as finding a way to build algebraic varieties with a specified list of properties. From the motivic homotopy theory the proof required the following:
A construction of the motivic analog of the basic ingredient of the Spanier–Whitehead duality in the form of the motivic fundamental class as a morphism from the motivic sphere to the Thom space of the motivic normal bundle over a smooth projective algebraic variety.
A construction of the motivic analog of the Steenrod algebra.
A proof of the proposition stating that over a field of characteristic zero the motivic Steenrod algebra characterizes all bi-stable cohomology operations in the motivic cohomology.
The first two constructions were developed by Voevodsky by 2003. Combined with the results that had been known since late 1980s, they were sufficient to reprove the Milnor conjecture.
Also in 2003, Voevodsky published on the web a preprint that nearly contained a proof of the general theorem. It followed the original scheme but was missing the proofs of three statements. Two of these statements were related to the properties of the motivic Steenrod operations and required the third fact above, while the third one required then-unknown facts about "norm varieties". The properties that these varieties were required to have had been formulated by Voevodsky in 1997, and the varieties themselves had been constructed by Markus Rost in 1998–2003. The proof that they have the required properties was completed by Andrei Suslin and Seva Joukhovitski in 2006.
The third fact above required the development of new techniques in motivic homotopy theory. The goal was to prove that a functor, which was not assumed to commute with limits or colimits, preserved weak equivalences between objects of a certain form. One of the main difficulties there was that the standard approach to the study of weak equivalences is based on Bousfield–Quillen factorization systems and model category structures, and these were inadequate. Other methods had to be developed, and this work was completed by Voevodsky only in 2008.
In the course of developing these techniques, it became clear that the first statement used without proof in Voevodsky's 2003 preprint is false. The proof had to be modified slightly to accommodate the corrected form of that statement. While Voevodsky continued to work out the final details of the proofs of the main theorems about motivic Eilenberg–MacLane spaces, Charles Weibel invented an approach to correct the place in the proof that had to be modified. Weibel also published in 2009 a paper that contained a summary of Voevodsky's constructions combined with the correction that he discovered.
Beilinson–Lichtenbaum conjecture
Let X be a smooth variety over a field containing . Beilinson and Lichtenbaum conjectured that the motivic cohomology group is isomorphic to the étale cohomology group when p≤q. This conjecture has now been proven, and is equivalent to the norm residue isomorphism theorem.
References
Bibliography
Conjectures that have been proved
Algebraic K-theory
Theorems in algebraic topology
Theorems in algebra | Norm residue isomorphism theorem | [
"Mathematics"
] | 1,976 | [
"Mathematical theorems",
"Theorems in algebra",
"Theorems in topology",
"Conjectures that have been proved",
"Mathematical problems",
"Theorems in algebraic topology",
"Algebra"
] |
27,132,938 | https://en.wikipedia.org/wiki/MasterSpec | MasterSpec is a master guide building and construction specification system used within the United States by architects, engineers, landscape architects, and interior designers to express results expected in construction. MasterSpec content and software is exclusively developed and distributed by Deltek (formerly Avitru) for the American Institute of Architects (AIA). It was developed in 1969 by the AIA to provide architects a means to create technical specifications without spending a lot of time researching products and writing up to date technical specifications from scratch. Content for MasterSpec is vetted by AIA-sponsored architectural and engineering review committees. In 2019, the company was acquired by Deltek, Inc.
Content Libraries
Today, MasterSpec consists of over 900 sections packaged in practice-specific libraries, following the MasterFormat 2018 standard:
Landscape
Site/Civil
Structural
Historic Preservation
Commissioning
Interiors
Mechanical
Electrical + Communication
Architectural
Building Architecture + Engineering
Each MasterSpec section is organized into three parts following SectionFormat and consists of 5 components:
Summary - Overview of section scope and content
Evaluations - Qualitative overview of products and discussion of recent technologies, including:
Testing procedures and applicable codes
Application and implementation suggestions
Environmental considerations, green building, or LEED information
References and standards
Links to the manufacturer and standards organizations
Master guide technical specifications in three-part CSI format along with editor's notes (instructions) and cross-references to Evaluations.
Drawing Coordination Checklist - Checklist of items to coordinate the section with the drawings.
Specification Coordination Checklist - Checklist of items to coordinate this section with other sections.
Formats
The MasterSpec technical specifications are available in three distinct formats or type:
Full Length - For moderate- to large-scale, complex projects and varied bidding and contracting situations
Short Form - Abridged versions of the sections with most common products
Outline - Corresponding outline specifications for use during design development and schematic phases
Timeline
1969: AIA’s MasterSpec is first distributed in paper form and includes Architectural and Civil & Structural Engineering Content
1973: ARCOM provides MasterSpec in ASCII format on magnetic tape
1984: MasterSpec is distributed only in floppy disc format and paper
1988: AIA assigns ARCOM to be the exclusive developer and distributor of all electronic and paper versions of MasterSpec; AIA dissolves contracts with all other “Automators,” asking them to work under ARCOM
1995: AIA awards ARCOM exclusive license to develop and distribute MasterSpec
1999: ARCOM introduces MasterSpec on CD-ROM
2005: ARCOM issues MasterSpec in MasterFormat 2006, a major change in the 40-year-old organization of specifications
2017:
ARCOM acquires InterSpec. This brings e-SPECS and specification services to ARCOM
ARCOM changes name to Avitru as part of acquiring InterSpec
2019: Deltek acquires Avitru
References
External links
Official website
Architectural design | MasterSpec | [
"Engineering"
] | 586 | [
"Design",
"Architectural design",
"Architecture"
] |
25,279,290 | https://en.wikipedia.org/wiki/Aevum | In scholastic philosophy, the aevum (also called aeviternity) is the temporal mode of existence experienced by angels and by the saints in heaven. In some ways, it is a state that logically lies between the eternity (timelessness) of God and the temporal experience of material beings. It is sometimes referred to as "improper eternity".
Etymology
The word aevum is Latin, originally signifying "age", "aeon", or "everlasting time"; the word aeviternity comes from the Medieval Latin neologism aeviternitas.
History
The concept of the aevum dates back at least to Albertus Magnus's treatise De quattuor coaequaevis. Its most familiar description is found in the Summa theologica of Thomas Aquinas. Aquinas identifies the aevum as the measure of the existence of beings that "recede less from permanence of being, forasmuch as their being neither consists in change, nor is the subject of change; nevertheless they have change annexed to them either actually, or potentially". As examples, he cites the heavenly bodies (which, in medieval science, were considered changeless in their nature, though variable in their position) and the angels, which "have an unchangeable being as regards their nature with changeableness as regards choice".
Contemporary philosophy
Frank Sheed, in his book Theology and Sanity, said that the aevum is also the measure of existence for the saints in heaven:
References
Angels in Christianity
Concepts in metaphysics
Christian saints
Heaven in Christianity
Infinity
Philosophy of time
Scholasticism | Aevum | [
"Physics",
"Mathematics"
] | 337 | [
"Physical quantities",
"Time",
"Mathematical objects",
"Infinity",
"Philosophy of time",
"Spacetime"
] |
25,279,655 | https://en.wikipedia.org/wiki/Carbide-derived%20carbon | Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is the common term for carbon materials derived from carbide precursors, such as binary (e.g. SiC, TiC), or ternary carbides, also known as MAX phases (e.g., Ti2AlC, Ti3SiC2). CDCs have also been derived from polymer-derived ceramics such as Si-O-C or Ti-C, and carbonitrides, such as Si-N-C. CDCs can occur in various structures, ranging from amorphous to crystalline carbon, from sp2- to sp3-bonded, and from highly porous to fully dense. Among others, the following carbon structures have been derived from carbide precursors: micro- and mesoporous carbon, amorphous carbon, carbon nanotubes, onion-like carbon, nanocrystalline diamond, graphene, and graphite. Among carbon materials, microporous CDCs exhibit some of the highest reported specific surface areas (up to more than 3000 m2/g). By varying the type of the precursor and the CDC synthesis conditions, microporous and mesoporous structures with controllable average pore size and pore size distributions can be produced. Depending on the precursor and the synthesis conditions, the average pore size control can be applied at sub-Angstrom accuracy. This ability to precisely tune the size and shapes of pores makes CDCs attractive for selective sorption and storage of liquids and gases (e.g., hydrogen, methane, CO2) and the high electric conductivity and electrochemical stability allows these structures to be effectively implemented in electrical energy storage and capacitive water desalinization.
History
The production of SiCl4 by high temperature reaction of chlorine gas with silicon carbide was first patented in 1918 by Otis Hutchins, with the process further optimized for higher yields in 1956. The solid porous carbon product was initially regarded as a waste byproduct until its properties and potential applications were investigated in more detail in 1959 by Walter Mohun. Research was carried out in the 1960-1980s mostly by Russian scientists on the synthesis of CDC via halogen treatment, while hydrothermal treatment was explored as an alternative route to derive CDCs in the 1990s. Most recently, research activities have centered on optimized CDC synthesis and nanoengineered CDC precursors.
Nomenclature
Historically, various terms have been used for CDC, such as "mineral carbon" or "nanoporous carbon". Later, a more adequate nomenclature introduced by Yury Gogotsi was adopted that clearly denotes the precursor. For example, CDC derived from silicon carbide has been referred to as SiC-CDC, Si-CDC, or SiCDC. Recently, it was recommended to adhere to a unified precursor-CDC-nomenclature to reflect the chemical composition of the precursor (e.g., B4C-CDC, Ti3SiC2-CDC, W2C-CDC).
Synthesis
CDCs have been synthesized using several chemical and physical synthesis methods. Most commonly, dry chlorine treatment is used to selectively etch metal or metalloid atoms from the carbide precursor lattice. The term "chlorine treatment" is to be preferred over chlorination as the chlorinated product, metal chloride, is the discarded byproduct and the carbon itself remains largely unreacted. This method is implemented for commercial production of CDC by Skeleton in Estonia and Carbon-Ukraine. Hydrothermal etching has also been used for synthesis of SiC-CDC which yielded a route for porous carbon films and nanodiamond synthesis.
Chlorine treatment
The most common method for producing porous carbide-derived carbons involves high-temperature etching with halogens, most commonly chlorine gas. The following generic equation describes the reaction of a metal carbide with chlorine gas (M: Si, Ti, V; similar equations can be written for other CDC precursors):
MC (solid) + 2 Cl2 (gas) → MCl4(gas) + C (solid)
Halogen treatment at temperatures between 200 and 1000 °C has been shown to yield mostly disordered porous carbons with a porosity between 50 and ~80 vol% depending on the precursor. Temperatures above 1000 °C result in predominantly graphitic carbon and an observed shrinkage of the material due to graphitization.
The linear growth rate of the solid carbon product phase suggests a reaction-driven kinetic mechanism, but the kinetics become diffusion-limited for thicker films or larger particles. A high mass transport condition (high gas flow rates) facilitates the removal of the chloride and shifts the reaction equilibrium towards the CDC product. Chlorine treatment has successfully been employed for CDC synthesis from a variety of carbide precursors, including SiC, TiC, B4C, BaC2, CaC2, Cr3C2, Fe3C, Mo2C, Al4C3, Nb2C, SrC2, Ta2C, VC, WC, W2C, ZrC, ternary carbides such as Ti2AlC, Ti3AlC2, and Ti3SiC2, and carbonitrides such as Ti2AlC0.5N0.5.
Most produced CDCs exhibit a prevalence of micropores (< 2 nm) and mesopores (between 2 and 50 nm), with specific distributions affected by carbide precursor and synthesis conditions. Hierarchic porosity can be achieved by using polymer-derived ceramics with or without utilizing a templating method. Templating yields an ordered array of mesopores in addition to the disordered network of micropores.
It has been shown that the initial crystal structure of the carbide is the primary factor affecting the CDC porosity, especially for low-temperature chlorine treatment. In general, a larger spacing between carbon atoms in the lattice correlates with an increase in the average pore diameter. As the synthesis temperature increases, the average pore diameter increases, while the pore size distribution becomes broader. The overall shape and size of the carbide precursor, however, is largely maintained and CDC formation is usually referred to as a conformal process.
Vacuum decomposition
Metal or metalloid atoms from carbides can selectively be extracted at high temperatures (usually above 1200 °C) under vacuum. The underlying mechanism is incongruent decomposition of carbides, using the high melting point of carbon compared to corresponding carbide metals that melt and eventually evaporate away, leaving the carbon behind.
Like halogen treatment, vacuum decomposition is a conformal process. The resulting carbon structures are, as a result of the higher temperatures, more ordered, and carbon nanotubes and graphene can be obtained. In particular, vertically aligned carbon nanotubes films of high tube density have been reported for vacuum decomposition of SiC. The high tube density translates into a high elastic modulus and high buckling resistance which is of particular interest for mechanical and tribological applications.
While carbon nanotube formation occurs when trace oxygen amounts are present, very high vacuum conditions (approaching 10−8–10−10 torr) result in the formation of graphene sheets. If the conditions are maintained, graphene transitions into bulk graphite. In particular, by vacuum annealing silicon carbide single crystals (wafers) at 1200–1500 °C, metal/metalloid atoms are selectively removed and a layer of 1–3 layer graphene (depending on the treatment time) is formed, undergoing a conformal transformation of 3 layers of silicon carbide into one monolayer of graphene. Also, graphene formation occurs preferentially on the Si-face of the 6H-SiC crystals, while nanotube growth is favored on the c-face of SiC.
Hydrothermal decomposition
The removal of metal atoms from carbides has been reported at high temperatures (300–1000 °C) and pressures (2–200 MPa). The following reactions are possible between metal carbides and water:
MC + x H2O → MOx + CH4
MC + (x+1) H2O → MOx + CO + (x+1) H2
MC + (x+2) H2O → MOx + CO2 + (x+2) H2
MC + x H2O → MOx + C + x H2
Only the last reaction yields solid carbon. The yield of carbon-containing gases increases with pressure (decreasing solid carbon yield) and decreases with temperatures (increasing the carbon yield). The ability to produce a usable porous carbon material is dependent on the solubility of the formed metal oxide (such as SiO2) in supercritical water. Hydrothermal carbon formation has been reported for SiC, TiC, WC, TaC, and NbC. Insolubility of metal oxides, for example TiO2, is a significant complication for certain metal carbides (e.g., Ti3SiC2).
Applications
One application of carbide-derived carbons is as active material in electrodes for electric double-layer capacitors, which have become commonly known as supercapacitors or ultracapacitors. This is motivated by their good electrical conductivity combined with high surface area, large micropore volume, and pore size control, which make it possible to match the porosity metrics of the porous carbon electrode to a given electrolyte. In particular, when the pore size approaches the size of the (desolvated) ion in the electrolyte, there is a significant increase in the capacitance. The electrically conductive carbon material minimizes resistance losses in supercapacitor devices and enhances charge screening and confinement, maximizing the packing density and subsequent charge storage capacity of microporous CDC electrodes.
CDC electrodes have been shown to yield a gravimetric capacitance of up to 190 F/g in aqueous electrolytes and 180 F/g in organic electrolytes. The highest capacitance values are observed for matching ion/pore systems, which allow high-density packing of ions in pores in superionic states. However, small pores, especially when combined with an overall large particle diameter, impose an additional diffusion limitation on the ion mobility during charge/discharge cycling. The prevalence of mesopores in the CDC structure allows for more ions to move past each other during charging and discharging, allowing for faster scan rates and improved rate handling abilities. Conversely, by implementing nanoparticle carbide precursors, shorter pore channels allow for higher electrolyte mobility, resulting in faster charge/discharge rates and higher power densities.
Proposed applications
Gas storage and carbon dioxide capturing
TiC-CDC activated with KOH or CO2 store up to 21 wt.% of methane at 25 °C at high pressure. CDCs with subnanometer pores in the 0.50–0.88 nm diameter range have shown to store up to 7.1 mol CO2/kg at 1 bar and 0 °C. CDCs also store up to 3 wt.% hydrogen at 60 bar and −196 °C, with additional increases possible as a result of chemical or physical activation of the CDC materials. SiOC-CDC with large subnanometer pore volumes are able to store over 5.5 wt.% hydrogen at 60 bar and −196 °C, almost reaching the goal of the US Department of Energy of 6 wt.% storage density for automotive applications. Methane storage densities of over 21.5 wt.% can be achieved for this material at those conditions. In particular, a predominance of pores with subnanometer diameters and large pore volumes are instrumental towards increasing storage densities.
Tribological coatings
CDC films obtained by vacuum annealing (ESK) or chlorine treatment of SiC ceramics yield a low friction coefficient. The friction coefficient of SiC, which is widely used in tribological applications for its high mechanical strength and hardness, can therefore decrease from ~0.7 to ~0.2 or less under dry conditions. Graphite, by contrast, cannot operate in dry environments. The porous three-dimensional network of CDC allows for high ductility and increased mechanical strength, minimizing fracture of the film under an applied force. These coatings find applications in dynamic seals. The friction properties can be further tailored with high-temperature hydrogen annealing and subsequent hydrogen termination of dangling bonds.
Protein adsorption
Carbide-derived carbons with a mesoporous structure remove large molecules from biofluids. As other carbons, CDCs possess good biocompatibility. CDCs have been demonstrated to remove cytokines such as TNF-alpha, IL-6, and IL-1beta from blood plasma. These are the most common receptor-binding agents released into the body during a bacterial infection that cause the primary inflammatory response during the attack and increase the potential lethality of sepsis, making their removal a very important concern. The rates and levels of removal of above cytokines (85–100% removed within 30 minutes) are higher than those observed for comparable activated carbons.
Catalyst support
Pt nanoparticles can be introduced to the SiC/C interface during chlorine treatment (in the form of Pt3Cl3). The particles diffuse through the material to form Pt particle surfaces, which may serve as catalyst support layers. In particular, in addition to Pt, other noble elements such as gold can be deposited into the pores, with the resulting nanoparticle size controlled by the pore size and overall pore size distribution of the CDC substrate. Such gold or platinum nanoparticles can be smaller than 1 nm even without employing surface coatings. Au nanoparticles in different CDCs (TiC-CDC, Mo2C-CDC, B4C-CDC) catalyze the oxidation of carbon monoxide.
Capacitive deionization (CDI)
As desalinization and purification of water is critical for obtaining deionized water for laboratory research, large-scale chemical synthesis in industry and consumer applications, the use of porous materials for this application has received particular interest. Capacitive deionization operates in a fashion with similarities to a supercapacitor. As an ion-containing water (electrolyte) is flown between two porous electrodes with an applied potential across the system, the corresponding ions assemble into a double layer in the pores of the two terminals, decreasing the ion content in the liquid exiting the purification device. Due to the ability of carbide-derived carbons to closely match the size of ions in the electrolyte, side-by-side comparisons of desalinization devices based on CDCs and activated carbon showed a significant efficiency increase in the 1.2–1.4 V range compared to activated carbon.
Commercial production and applications
Having originated as the by-product of industrial metal chloride synthesis, CDC has certainly a potential for large-scale production at a moderate cost. Currently, only small companies engage in production of carbide-derived carbons and their implementation in commercial products. For example, Skeleton, which is located in Tartu, Estonia, and Carbon-Ukraine, located in Kyiv, Ukraine, have a diverse product line of porous carbons for supercapacitors, gas storage, and filtration applications. In addition, numerous education and research institutions worldwide are engaged in basic research of CDC structure, synthesis, or (indirectly) their application for various high-end applications.
See also
Hydrogen storage
Hydrogen economy
Nanotechnology
Nanomaterials
Nanoengineering
Allotropes of carbon
References
External links
http://nano.materials.drexel.edu
http://skeletontech.com/
http://carbon.org.ua/
Allotropes of carbon
Capacitors
Nanomaterials | Carbide-derived carbon | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,323 | [
"Allotropes of carbon",
"Physical quantities",
"Allotropes",
"Capacitors",
"Capacitance",
"Nanotechnology",
"Nanomaterials"
] |
25,280,985 | https://en.wikipedia.org/wiki/Transition-edge%20sensor | A transition-edge sensor (TES) is a type of cryogenic energy sensor or cryogenic particle detector that exploits the strongly temperature-dependent resistance of the superconducting phase transition.
History
The first demonstrations of the superconducting transition's measurement potential appeared in the 1940s, 30 years after Onnes's discovery of superconductivity. D. H. Andrews demonstrated the first transition-edge bolometer, a current-biased tantalum wire which he used to measure an infrared signal. Subsequently he demonstrated a transition-edge calorimeter made of niobium nitride which was used to measure alpha particles. However, the TES detector did not gain popularity for about 50 years, due primarily to the difficulty in stabilizing the temperature within the narrow superconducting transition region, especially when more than one pixel was operated at the same time, and also due to the difficulty of signal readout from such a low-impedance system. Joule heating in a current-biased TES can lead to thermal runaway that drives the detector into the normal (non-superconducting) state, a phenomenon known as positive electrothermal feedback. The thermal runaway problem was solved in 1995 by K. D. Irwin by voltage-biasing the TES, establishing stable negative electrothermal feedback, and coupling them to superconducting quantum interference devices (SQUID) current amplifiers. This breakthrough has led to widespread adoption of TES detectors.
Setup, operation, and readout
The TES is voltage-biased by driving a current source Ibias through a load resistor RL (see figure). The voltage is chosen to put the TES in its so-called "self-biased region" where the power dissipated in the device is constant with the applied voltage. When a photon is absorbed by the TES, this extra power is removed by negative electrothermal feedback: the TES resistance increases, causing a drop in TES current; the Joule power in turn drops, cooling the device back to its equilibrium state in the self-biased region. In a common SQUID readout system, the TES is operated in series with the input coil L, which is inductively coupled to a SQUID series-array. Thus a change in TES current manifests as a change in the input flux to the SQUID, whose output is further amplified and read by room-temperature electronics.
Functionality
Any bolometric sensor employs three basic components: an absorber of incident energy, a thermometer for measuring this energy, and a thermal link to base temperature to dissipate the absorbed energy and cool the detector.
Absorber
The simplest absorption scheme can be applied to TESs operating in the near-IR, optical, and UV regimes. These devices generally utilize a tungsten TES as its own absorber, which absorbs up to 20% of the incident radiation. If high-efficiency detection is desired, the TES may be fabricated in a multi-layer optical cavity tuned to the desired operating wavelength and employing a backside mirror and frontside anti-reflection coating. Such techniques can decrease the transmission and reflection from the detectors to negligibly low values; 95% detection efficiency has been observed. At higher energies, the primary obstacle to absorption is transmission, not reflection, and thus an absorber with high photon stopping power and low heat capacity is desirable; a bismuth film is often employed. Any absorber should have low heat capacity with respect to the TES. Higher heat capacity in the absorber will contribute to noise and decrease the sensitivity of the detector (since a given absorbed energy will not produce as large of a change in TES resistance). For far-IR radiation into the millimeter range, the absorption schemes commonly employ antennas or feedhorns.
Thermometer
The TES operates as a thermometer in the following manner: absorbed incident energy increases the resistance of the voltage-biased sensor within its transition region, and the integral of the resulting drop in current is proportional to the energy absorbed by the detector. The output signal is proportional to the temperature change of the absorber, and thus for maximal sensitivity, a TES should have low heat capacity and a narrow transition. Important TES properties including not only heat capacity but also thermal conductance are strongly temperature dependent, so the choice of transition temperature Tc is critical to the device design. Furthermore, Tc should be chosen to accommodate the available cryogenic system. Tungsten has been a popular choice for elemental TESs as thin-film tungsten displays two phases, one with Tc ~15 mK and the other with Tc ~1–4 K, which can be combined to finely tune the overall device Tc. Bilayer and multilayer TESs are another popular fabrication approach, where thin films of different materials are combined to achieve the desired Tc.
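As a rough illustration of this readout principle, the following Python sketch integrates the drop in TES current during a pulse and scales it by the bias voltage; the bias value and the synthetic pulse are arbitrary placeholders, and a real analysis would account for non-ideal electrothermal feedback.

```python
import numpy as np

def pulse_energy(time_s, current_a, v_bias, baseline_samples=100):
    """Estimate absorbed energy from a voltage-biased TES current pulse.

    The integral of the drop in TES current is proportional to the absorbed
    energy; in the ideal voltage-biased limit the constant is the bias voltage.
    """
    current_a = np.asarray(current_a)
    baseline = current_a[:baseline_samples].mean()   # quiescent current before the pulse
    delta_i = baseline - current_a                   # current drop during the pulse
    # Trapezoidal integration of delta_i over time, times bias voltage -> joules
    return v_bias * np.sum(0.5 * (delta_i[1:] + delta_i[:-1]) * np.diff(time_s))

# Synthetic example pulse (illustrative numbers only):
t = np.linspace(0.0, 1e-3, 2000)
i = 50e-6 - 5e-6 * np.exp(-np.clip(t - 1e-4, 0.0, None) / 1e-4) * (t > 1e-4)
print(pulse_energy(t, i, v_bias=2e-6))
```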
Thermal conductance
Finally, it is necessary to tune the thermal coupling between the TES and the bath of cooling liquid; a low thermal conductance is necessary to ensure that incident energy is seen by the TES rather than being lost directly to the bath. However, the thermal link must not be too weak, as it is necessary to cool the TES back to bath temperature after the energy has been absorbed. Two approaches to control the thermal link are by electron–phonon coupling and by mechanical machining. At cryogenic temperatures, the electron and phonon systems in a material can become only weakly coupled. The electron–phonon thermal conductance is strongly temperature-dependent, and hence the thermal conductance can be tuned by adjusting Tc. Other devices use mechanical means of controlling the thermal conductance such as building the TES on a sub-micrometre membrane over a hole in the substrate or in the middle of a sparse "spiderweb" structure.
Advantages and disadvantages
TES detectors are attractive to the scientific community for a variety of reasons. Among their most striking attributes are an unprecedentedly high detection efficiency customizable to wavelengths from the millimeter regime to gamma rays and a theoretically negligible background dark count level (less than 1 event in 1000 s from intrinsic thermal fluctuations of the device). (In practice, although only a real energy signal will create a current pulse, a nonzero background level may be registered by the counting algorithm or the presence of background light in the experimental setup. Even thermal blackbody radiation may be seen by a TES optimized for use in the visible regime.)
TES single-photon detectors suffer nonetheless from a few disadvantages as compared to their avalanche photodiode (APD) counterparts. APDs are manufactured in small modules, which count photons out-of-the-box with a dead time of a few nanoseconds and output a pulse corresponding to each photon with a jitter of tens of picoseconds. In contrast, TES detectors must be operated in a cryogenic environment, output a signal that must be further analyzed to identify photons, and have a jitter of approximately 100 ns. Furthermore, a single-photon spike on a TES detector lasts on the order of microseconds.
Applications
TES arrays are becoming increasingly common in physics and astronomy experiments such as SCUBA-2, the HAWC+ instrument on the Stratospheric Observatory for Infrared Astronomy, the Atacama Cosmology Telescope, the Cryogenic Dark Matter Search, the Cryogenic Observatory for Signatures Seen in Next-Generation Underground Searches, the Cryogenic Rare Event Search with Superconducting Thermometers, the E and B Experiment, the South Pole Telescope, the Spider polarimeter, the X-IFU instrument of the Advanced Telescope for High Energy Astrophysics satellite, the future LiteBIRD Cosmic Microwave Background polarization experiment, the Simons Observatory, and the CMB Stage-IV Experiment.
See also
Bolometer
Cryogenic particle detectors
References
Superconducting detectors
Radiometry
Sensors
Particle detectors | Transition-edge sensor | [
"Materials_science",
"Technology",
"Engineering"
] | 1,612 | [
"Telecommunications engineering",
"Superconductivity",
"Particle detectors",
"Measuring instruments",
"Sensors",
"Superconducting detectors",
"Radiometry"
] |
25,287,133 | https://en.wikipedia.org/wiki/Anywhere%20on%20Earth | Anywhere on Earth (AoE) is a calendar designation that indicates that a period expires when the date passes everywhere on Earth. It is a practice to help specify deadlines such as "March 16, 2004, End of Day, Anywhere on Earth (AoE)" without requiring timezone calculations or Daylight saving time adjustments.
For any given date, the latest place on Earth where it would be valid is on Howland and Baker Islands, in the IDLW time zone (the Western Hemisphere side of the International Date Line). Therefore, the day ends AoE when it ends on Howland Island.
The convention originated in IEEE 802.16 balloting procedures. Many IEEE 802 ballot deadlines are established as the end of day using "AoE", for "Anywhere on Earth" as a designation. This means that the deadline has not passed if, anywhere on Earth, the deadline date has not yet passed.
The day's end AoE occurs at noon Coordinated Universal Time (UTC) of the following day, Howland and Baker Islands being halfway around the world from the prime meridian that is the base reference longitude for UTC. Thus, in standard notation this is:
UTC−12:00 (daylight saving time [DST] is not applicable)
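Because AoE is simply a fixed UTC−12:00 offset, a deadline expressed this way can be converted to UTC without a time-zone database. The following Python sketch uses the March 16, 2004 example date quoted earlier in the article.

```python
from datetime import datetime, timedelta, timezone

AOE = timezone(timedelta(hours=-12), "AoE")  # UTC-12:00; DST never applies

# End of March 16, 2004, Anywhere on Earth:
deadline_aoe = datetime(2004, 3, 16, 23, 59, 59, tzinfo=AOE)
print(deadline_aoe.astimezone(timezone.utc))
# 2004-03-17 11:59:59+00:00 -- i.e. noon UTC on the following day, as noted above
```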
References
External links
IEEE 802.16 AOE Deadline Documentation — IEEE802.org
Time zone names - Date Line West — WorldTimeZone.com
AoE – Anywhere on Earth (Standard Time) — TimeAndDate.com
Timezone of "AoE" for a conference submission deadline? — StackExchange.com
Calendars
Time zones | Anywhere on Earth | [
"Physics"
] | 328 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
25,287,284 | https://en.wikipedia.org/wiki/Hessenberg%20variety | In geometry, Hessenberg varieties, first studied by Filippo De Mari, Claudio Procesi, and Mark A. Shayman, are a family of subvarieties of the full flag variety which are defined by a Hessenberg function h and a linear transformation X. The study of Hessenberg varieties was first motivated by questions in numerical analysis in relation to algorithms for computing eigenvalues and eigenspaces of the linear operator X. Later work by T. A. Springer, Dale Peterson, Bertram Kostant, among others, found connections with combinatorics, representation theory and cohomology.
Definitions
A Hessenberg function is a map

h : \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}

such that

h(i) \geq i

for each i. For example, the function that sends the numbers 1 to 5 (in order) to 2, 3, 3, 4, and 5 is a Hessenberg function.
For any Hessenberg function h and a linear transformation

X : \mathbb{C}^n \to \mathbb{C}^n,

the Hessenberg variety is the set of all flags F_{\bullet} = (F_1 \subset F_2 \subset \cdots \subset F_n) such that

X \cdot F_i \subseteq F_{h(i)}

for all i.
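The containment condition can be checked numerically for explicit flags. The following Python sketch is an illustration only: the flag is represented by matrices whose columns span each subspace, and the particular X (a single nilpotent Jordan block) and the coordinate flag are assumptions made for the example, not data from the article.

```python
import numpy as np

def in_hessenberg_variety(X, flag, h, tol=1e-9):
    """Test whether a flag lies in the Hessenberg variety Hess(X, h).

    flag[i-1] is an n-by-i matrix whose columns span F_i; the condition
    checked is X.F_i contained in F_{h(i)} for every i.
    """
    for i, F_i in enumerate(flag, start=1):
        F_hi = flag[h[i - 1] - 1]                     # spanning matrix of F_{h(i)}
        combined = np.hstack([F_hi, X @ F_i])         # augment F_{h(i)} with X.F_i
        if np.linalg.matrix_rank(combined, tol) > np.linalg.matrix_rank(F_hi, tol):
            return False                              # X.F_i is not contained in F_{h(i)}
    return True

n = 5
E = np.eye(n)
flag = [E[:, : i + 1] for i in range(n)]              # the standard coordinate flag
h = [2, 3, 3, 4, 5]                                   # the Hessenberg function from the text
X = np.diag(np.ones(n - 1), k=1)                      # a nilpotent single Jordan block
print(in_hessenberg_variety(X, flag, h))              # True
```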
Examples
Some examples of Hessenberg varieties (with their function) include:
The Full Flag variety: h(i) = n for all i
The Peterson variety: h(i) = i + 1 for i < n, and h(n) = n
The Springer variety: h(i) = i for all i.
References
Bertram Kostant, Flag manifold quantum cohomology, the Toda lattice, and the representation with highest weight , Selecta Mathematica (N.S.) 2, 1996, 43–91.
Julianna Tymoczko, Linear conditions imposed on flag varieties, American Journal of Mathematics 128 (2006), 1587–1604.
Algebraic geometry
Algebraic combinatorics | Hessenberg variety | [
"Mathematics"
] | 316 | [
"Fields of abstract algebra",
"Algebraic combinatorics",
"Algebraic geometry",
"Combinatorics"
] |
25,287,905 | https://en.wikipedia.org/wiki/Didesmethylcitalopram | Didesmethylcitalopram is an active metabolite of the antidepressant drug citalopram (racemic). Didesmethylescitalopram is an active metabolite of the antidepressant escitalopram, the S-enantiomer of citalopram. Like citalopram and escitalopram, didesmethyl(es)citalopram functions as a selective serotonin reuptake inhibitor (SSRI), and is responsible for some of its parents' therapeutic benefits.
See also
Desmethylcitalopram
Desmethylsertraline
Desmethylvenlafaxine
Norfluoxetine
References
Isobenzofurans
Nitriles
4-Fluorophenyl compounds
Human drug metabolites | Didesmethylcitalopram | [
"Chemistry"
] | 174 | [
"Chemicals in medicine",
"Nitriles",
"Functional groups",
"Human drug metabolites"
] |
1,275,975 | https://en.wikipedia.org/wiki/Disruptive%20selection | In evolutionary biology, disruptive selection, also called diversifying selection, describes changes in population genetics in which extreme values for a trait are favored over intermediate values. In this case, the variance of the trait increases and the population is divided into two distinct groups. In this more individuals acquire peripheral character value at both ends of the distribution curve.
Overview
Natural selection is known to be one of the most important biological processes behind evolution. There are many variations of traits, and some cause greater or lesser reproductive success of the individual. The effect of selection is to promote certain alleles, traits, and individuals that have a higher chance to survive and reproduce in their specific environment. Since the environment has a carrying capacity, selection acts on individuals so that only the fittest offspring survive and reproduce to their full potential. The more advantageous the trait is, the more common it will become in the population. Disruptive selection is a specific type of natural selection that actively selects against the intermediate in a population, favoring both extremes of the spectrum.
Disruptive selection is often inferred to lead to sympatric speciation through a phyletic gradualism mode of evolution. Disruptive selection can be caused or influenced by multiple factors and can also have multiple outcomes in addition to speciation. Individuals within the same environment can develop a preference for extremes of a trait, against the intermediate. Selection can favor divergent body morphologies for accessing food, such as beak and dental structure. This is often more prevalent in environments without a wide clinal range of resources, a situation that causes heterozygote disadvantage or selection favoring homozygotes.
Niche partitioning allows for selection of differential patterns of resource usage, which can drive speciation. To the contrast, niche conservation pulls individuals toward ancestral ecological traits in an evolutionary tug-of-war. Also, nature tends to have a 'jump on the band wagon' perspective when something beneficial is found. This can lead to the opposite occurring with disruptive selection eventually selecting against the average; when everyone starts taking advantage of that resource it will become depleted and the extremes will be favored. Furthermore, gradualism is a more realistic view when looking at speciation as compared to punctuated equilibrium.
Disruptive selection can initially rapidly intensify divergence; this is because it is only manipulating alleles that already exist. Often it is not creating new ones by mutation which takes a long time. Usually complete reproductive isolation does not occur until many generations, but behavioral or morphological differences separate the species from reproducing generally. Furthermore, generally hybrids have reduced fitness which promotes reproductive isolation.
Example
Suppose there is a population of rabbits. The colour of the rabbits is governed by two incompletely dominant traits: black fur, represented by "B", and white fur, represented by "b". A rabbit in this population with a genotype of "BB" would have a phenotype of black fur, a genotype of "Bb" would have grey fur (a display of both black and white), and a genotype of "bb" would have white fur.
If this population of rabbits occurred in an environment that had areas of black rocks as well as areas of white rocks, the rabbits with black fur would be able to hide from predators amongst the black rocks, and the rabbits with white fur likewise amongst the white rocks. The rabbits with grey fur, however, would stand out in all areas of the habitat, and would thereby suffer greater predation.
As a consequence of this type of selective pressure, our hypothetical rabbit population would be disruptively selected for extreme values of the fur colour trait: white or black, but not grey. This is an example of underdominance (heterozygote disadvantage) leading to disruptive selection.
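The dynamics of this heterozygote disadvantage can be illustrated with a minimal single-locus selection model. The fitness values below are arbitrary stand-ins chosen only to make grey (heterozygous) rabbits less fit; the recursion is the standard one-locus selection equation.

```python
# One-locus selection with heterozygote disadvantage (underdominance).
# Arbitrary fitnesses: BB (black) = 1.0, Bb (grey) = 0.6, bb (white) = 1.0.
w_BB, w_Bb, w_bb = 1.0, 0.6, 1.0

def next_generation(p):
    """One generation of selection acting on the frequency p of the B allele."""
    q = 1.0 - p
    w_bar = p * p * w_BB + 2 * p * q * w_Bb + q * q * w_bb   # mean fitness
    return (p * p * w_BB + p * q * w_Bb) / w_bar              # new frequency of B

for p0 in (0.45, 0.55):              # start just below / above the unstable midpoint
    p = p0
    for _ in range(100):
        p = next_generation(p)
    print(p0, "->", round(p, 3))      # the intermediate is eliminated: p goes toward 0 or 1
```

In a single freely interbreeding population, this kind of selection drives one allele toward fixation rather than preserving both extremes; maintaining both morphs requires an additional mechanism such as assortative mating, as discussed below.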
Sympatric speciation
It is believed that disruptive selection is one of the main forces that drive sympatric speciation in natural populations. The pathways that lead from disruptive selection to sympatric speciation seldom are prone to deviation; such speciation is a domino effect that depends on the consistency of each distinct variable. These pathways are the result of disruptive selection in intraspecific competition; it may cause reproductive isolation, and finally culminate in sympatric speciation.
It is important to keep in mind that disruptive selection does not always have to be based on intraspecific competition. It is also important to know that this type of natural selection is similar to the other ones. Where it is not the major factor, intraspecific competition can be discounted in assessing the operative aspects of the course of adaptation. For example, what may drive disruptive selection instead of intraspecific competition might be polymorphisms that lead to reproductive isolation, and thence to speciation.
When disruptive selection is based on intraspecific competition, the resulting selection in turn promotes ecological niche diversification and polymorphisms. If multiple morphs (phenotypic forms) occupy different niches, such separation could be expected to promote reduced competition for resources. Disruptive selection is seen more often in high density populations rather than in low density populations because intraspecific competition tends to be more intense within higher density populations. This is because higher density populations often imply more competition for resources. The resulting competition drives polymorphisms to exploit different niches or changes in niches in order to avoid competition. If one morph has no need for resources used by another morph, then it is likely that neither would experience pressure to compete or interact, thereby supporting the persistence and possibly the intensification of the distinctness of the two morphs within the population. This theory does not necessarily have a lot of supporting evidence in natural populations, but it has been seen many times in experimental situations using existing populations. These experiments further support that, under the right situations (as described above), this theory could prove to be true in nature.
When intraspecific competition is not at work disruptive selection can still lead to sympatric speciation and it does this through maintaining polymorphisms. Once the polymorphisms are maintained in the population, if assortative mating is taking place, then this is one way that disruptive selection can lead in the direction of sympatric speciation. If different morphs have different mating preferences then assortative mating can occur, especially if the polymorphic trait is a "magic trait", meaning a trait that is under ecological selection and in turn has a side effect on reproductive behavior. In a situation where the polymorphic trait is not a magic trait then there has to be some kind of fitness penalty for those individuals who do not mate assortatively and a mechanism that causes assortative mating has to evolve in the population. For example, if a species of butterflies develops two kinds of wing patterns, crucial to mimicry purposes in their preferred habitat, then mating between two butterflies of different wing patterns leads to an unfavorable heterozygote. Therefore, butterflies will tend to mate with others of the same wing pattern promoting increased fitness, eventually eliminating the heterozygote altogether. This unfavorable heterozygote generates pressure for a mechanism that cause assortative mating which will then lead to reproductive isolation due to the production of post-mating barriers. It is actually fairly common to see sympatric speciation when disruptive selection is supporting two morphs, specifically when the phenotypic trait affects fitness rather than mate choice.
In both situations, one where intraspecific competition is at work and the other where it is not, if all these factors are in place, they will lead to reproductive isolation, which can lead to sympatric speciation.
Other outcomes
polymorphism
sexual dimorphism
phenotypic plasticity
Significance
Disruptive selection is of particular significance in the history of evolutionary study, as it is involved in one of evolution's "cardinal cases", namely the finch populations observed by Darwin in the Galápagos.
He observed that the species of finches were similar enough to ostensibly have been descended from a single species. However, they exhibited disruptive variation in beak size. This variation appeared to be adaptively related to the seed size available on the respective islands (big beaks for big seeds, small beaks for small seeds). Medium beaks had difficulty retrieving small seeds and were also not tough enough for the bigger seeds, and were hence maladaptive.
While it is true that disruptive selection can lead to speciation, this is not as quick or straightforward of a process as other types of speciation or evolutionary change. This introduces the topic of gradualism, which is a slow but continuous accumulation of changes over long periods of time. This is largely because the results of disruptive selection are less stable than the results of directional selection (directional selection favors individuals at only one end of the spectrum).
For example, let us take the mathematically straightforward yet biologically improbable case of the rabbits: Suppose directional selection were taking place. The field only has dark rocks in it, so the darker the rabbit, the more effectively it can hide from predators. Eventually there will be a lot of black rabbits in the population (hence many "B" alleles) and a lesser amount of grey rabbits (who contribute 50% chromosomes with "B" allele and 50% chromosomes with "b" allele to the population). There will be few white rabbits (not very many contributors of chromosomes with "b" allele to the population). This could eventually lead to a situation in which chromosomes with "b" allele die out, making black the only possible color for all subsequent rabbits. The reason for this is that there is nothing "boosting" the level of "b" chromosomes in the population. They can only go down, and eventually die out.
Consider now the case of disruptive selection. The result is equal numbers of black and white rabbits, and hence equal numbers of chromosomes with "B" or "b" allele, still floating around in that population. Every time a white rabbit mates with a black one, only gray rabbits results. So, in order for the results to "click", there needs to be a force causing white rabbits to choose other white rabbits, and black rabbits to choose other black ones. In the case of the finches, this "force" was geographic/niche isolation. This leads one to think that disruptive selection cannot happen and is normally because of species being geographically isolated, directional selection or by stabilising selection.
See also
Character displacement
Balancing selection
Directional selection
Negative selection (natural selection)
Stabilizing selection
Sympatric speciation
Fluctuating selection
Selection
References
Selection | Disruptive selection | [
"Biology"
] | 2,201 | [
"Evolutionary processes",
"Selection"
] |
1,276,320 | https://en.wikipedia.org/wiki/Proper%20transfer%20function | In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator. A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator.
The difference between the degree of the denominator (number of poles) and degree of the numerator (number of zeros) is the relative degree of the transfer function.
Example
A transfer function whose numerator and denominator polynomials have equal degree is proper, because the degree of the numerator does not exceed the degree of the denominator; it is moreover biproper, because the two degrees are equal; but it is not strictly proper, because the numerator degree is not strictly less than the denominator degree.
A transfer function whose numerator degree exceeds its denominator degree is not proper (and therefore not strictly proper). Such a transfer function can be made proper by using the method of polynomial long division.
A transfer function whose numerator degree is strictly less than its denominator degree is strictly proper. These distinctions are illustrated in the sketch below.
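The classification can be checked mechanically by comparing polynomial degrees. The following Python sketch does this for coefficient lists given in descending powers of s; the example polynomials are illustrative stand-ins rather than the article's original examples.

```python
def classify(num, den):
    """Classify a transfer function from coefficient lists (highest power first).

    The relative degree is deg(den) - deg(num).
    """
    deg_num = len(num) - 1
    deg_den = len(den) - 1
    if deg_num < deg_den:
        return "strictly proper (relative degree %d)" % (deg_den - deg_num)
    if deg_num == deg_den:
        return "biproper (proper, but not strictly proper)"
    return "not proper (apply polynomial long division to obtain a proper part)"

# Illustrative examples (not from the article):
print(classify([1, 1], [1, 3, 2]))   # (s + 1) / (s^2 + 3s + 2)  -> strictly proper
print(classify([2, 1], [1, 5]))      # (2s + 1) / (s + 5)        -> biproper
print(classify([1, 0, 0], [1, 1]))   # s^2 / (s + 1)             -> not proper
```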
Implications
A proper transfer function will never grow unbounded as the frequency approaches infinity:

|G(\pm j\infty)| = \lim_{\omega \to \infty} |G(j\omega)| < \infty

A strictly proper transfer function will approach zero as the frequency approaches infinity (which is true for all physical processes):

G(\pm j\infty) = \lim_{\omega \to \infty} G(j\omega) = 0

Also, the integral of the real part of a strictly proper transfer function is zero.
References
Transfer functions - ECE 486: Control Systems Spring 2015, University of Illinois
ELEC ENG 4CL4: Control System Design Notes for Lecture #9, 2004, Dr. Ian C. Bruce, McMaster University
Control theory | Proper transfer function | [
"Mathematics"
] | 269 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
1,276,437 | https://en.wikipedia.org/wiki/Soil%20mechanics | Soil mechanics is a branch of soil physics and applied mechanics that describes the behavior of soils. It differs from fluid mechanics and solid mechanics in the sense that soils consist of a heterogeneous mixture of fluids (usually air and water) and particles (usually clay, silt, sand, and gravel) but soil may also contain organic solids and other matter. Along with rock mechanics, soil mechanics provides the theoretical basis for analysis in geotechnical engineering, a subdiscipline of civil engineering, and engineering geology, a subdiscipline of geology. Soil mechanics is used to analyze the deformations of and flow of fluids within natural and man-made structures that are supported on or made of soil, or structures that are buried in soils. Example applications are building and bridge foundations, retaining walls, dams, and buried pipeline systems. Principles of soil mechanics are also used in related disciplines such as geophysical engineering, coastal engineering, agricultural engineering, and hydrology.
This article describes the genesis and composition of soil, the distinction between pore water pressure and inter-granular effective stress, capillary action of fluids in the soil pore spaces, soil classification, seepage and permeability, time dependent change of volume due to squeezing water out of tiny pore spaces, also known as consolidation, shear strength and stiffness of soils. The shear strength of soils is primarily derived from friction between the particles and interlocking, which are very sensitive to the effective stress. The article concludes with some examples of applications of the principles of soil mechanics such as slope stability, lateral earth pressure on retaining walls, and bearing capacity of foundations.
Genesis and composition of soils
Genesis
The primary mechanism of soil creation is the weathering of rock. All rock types (igneous rock, metamorphic rock and sedimentary rock) may be broken down into small particles to create soil. Weathering mechanisms are physical weathering, chemical weathering, and biological weathering. Human activities such as excavation, blasting, and waste disposal may also create soil. Over geologic time, deeply buried soils may be altered by pressure and temperature to become metamorphic or sedimentary rock, and if melted and solidified again, they would complete the geologic cycle by becoming igneous rock.
Physical weathering includes temperature effects, freeze and thaw of water in cracks, rain, wind, impact and other mechanisms. Chemical weathering includes dissolution of matter composing a rock and precipitation in the form of another mineral. Clay minerals, for example, can be formed by weathering of feldspar, which is the most common mineral present in igneous rock.
The most common mineral constituent of silt and sand is quartz, also called silica, which has the chemical name silicon dioxide. The reason that feldspar is most common in rocks but silica is more prevalent in soils is that feldspar is much more soluble than silica.
Silt, sand, and gravel are essentially small fragments of broken rock.
According to the Unified Soil Classification System, silt particle sizes are in the range of 0.002 mm to 0.075 mm and sand particles have sizes in the range of 0.075 mm to 4.75 mm.
Gravel particles are broken pieces of rock in the size range 4.75 mm to 100 mm. Particles larger than gravel are called cobbles and boulders.
Transport
Soil deposits are affected by the mechanism of transport and deposition to their location. Soils that are not transported are called residual soils—they exist at the same location as the rock from which they were generated. Decomposed granite is a common example of a residual soil. The common mechanisms of transport are the actions of gravity, ice, water, and wind. Wind blown soils include dune sands and loess. Water carries particles of different size depending on the speed of the water, thus soils transported by water are graded according to their size. Silt and clay may settle out in a lake, and gravel and sand collect at the bottom of a river bed. Wind blown soil deposits (aeolian soils) also tend to be sorted according to their grain size. Erosion at the base of glaciers is powerful enough to pick up large rocks and boulders as well as soil; soils dropped by melting ice can be a well graded mixture of widely varying particle sizes. Gravity on its own may also carry particles down from the top of a mountain to make a pile of soil and boulders at the base; soil deposits transported by gravity are called colluvium.
The mechanism of transport also has a major effect on the particle shape. For example, low velocity grinding in a river bed will produce rounded particles. Freshly fractured colluvium particles often have a very angular shape.
Soil composition
Soil mineralogy
Silts, sands and gravels are classified by their size, and hence they may consist of a variety of minerals. Owing to the stability of quartz compared to other rock minerals, quartz is the most common constituent of sand and silt. Mica, and feldspar are other common minerals present in sands and silts. The mineral constituents of gravel may be more similar to that of the parent rock.
The common clay minerals are montmorillonite or smectite, illite, and kaolinite or kaolin. These minerals tend to form in sheet or plate like structures, with length typically ranging between 10⁻⁷ m and 4×10⁻⁶ m and thickness typically ranging between 10⁻⁹ m and 2×10⁻⁶ m, and they have a relatively large specific surface area. The specific surface area (SSA) is defined as the ratio of the surface area of particles to the mass of the particles. Clay minerals typically have specific surface areas in the range of 10 to 1,000 square meters per gram of solid. Due to the large surface area available for chemical, electrostatic, and van der Waals interaction, the mechanical behavior of clay minerals is very sensitive to the amount of pore fluid available and the type and amount of dissolved ions in the pore fluid.
The minerals of soils are predominantly formed by atoms of oxygen, silicon, hydrogen, and aluminum, organized in various crystalline forms. These elements along with calcium, sodium, potassium, magnesium, and carbon constitute over 99 per cent of the solid mass of soils.
Grain size distribution
Soils consist of a mixture of particles of different size, shape and mineralogy. Because the size of the particles obviously has a significant effect on the soil behavior, the grain size and grain size distribution are used to classify soils. The grain size distribution describes the relative proportions of particles of various sizes. The grain size is often visualized in a cumulative distribution graph which, for example, plots the percentage of particles finer than a given size as a function of size. The median grain size, $D_{50}$, is the size for which 50% of the particle mass consists of finer particles. Soil behavior, especially the hydraulic conductivity, tends to be dominated by the smaller particles, hence, the term "effective size", denoted by $D_{10}$, is defined as the size for which 10% of the particle mass consists of finer particles.
Sands and gravels that possess a wide range of particle sizes with a smooth distribution of particle sizes are called well graded soils. If the soil particles in a sample are predominantly in a relatively narrow range of sizes, the sample is uniformly graded. If a soil sample has distinct gaps in the gradation curve, e.g., a mixture of gravel and fine sand, with no coarse sand, the sample may be gap graded. Uniformly graded and gap graded soils are both considered to be poorly graded. There are many methods for measuring particle-size distribution. The two traditional methods are sieve analysis and hydrometer analysis.
Sieve analysis
The size distribution of gravel and sand particles are typically measured using sieve analysis. The formal procedure is described in ASTM D6913-04(2009). A stack of sieves with accurately dimensioned holes between a mesh of wires is used to separate the particles into size bins. A known volume of dried soil, with clods broken down to individual particles, is put into the top of a stack of sieves arranged from coarse to fine. The stack of sieves is shaken for a standard period of time so that the particles are sorted into size bins. This method works reasonably well for particles in the sand and gravel size range. Fine particles tend to stick to each other, and hence the sieving process is not an effective method. If there are a lot of fines (silt and clay) present in the soil it may be necessary to run water through the sieves to wash the coarse particles and clods through.
A variety of sieve sizes are available. The boundary between sand and silt is arbitrary. According to the Unified Soil Classification System, a #4 sieve (4 openings per inch) having 4.75 mm opening size separates sand from gravel and a #200 sieve with a 0.075 mm opening separates sand from silt and clay. According to the British standard, 0.063 mm is the boundary between sand and silt, and 2 mm is the boundary between sand and gravel.
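Characteristic sizes such as $D_{10}$ and $D_{50}$ are read off the cumulative grain size curve produced by a sieve analysis. The sketch below uses hypothetical sieve data (not taken from any standard) and interpolates the gradation curve on a logarithmic size axis to obtain the characteristic sizes and the common gradation coefficients.

```python
import numpy as np

# Hypothetical sieve analysis: opening sizes (mm) and percent passing by mass.
sizes   = np.array([9.5, 4.75, 2.0, 0.85, 0.425, 0.25, 0.15, 0.075])
passing = np.array([100,  95,   80,  60,   40,    25,   12,    5])

def d_percent(p):
    """Grain size (mm) at which p percent of the mass is finer,
    interpolating the gradation curve on a log(size) axis."""
    # np.interp needs increasing x, so the arrays are reversed.
    return 10 ** np.interp(p, passing[::-1], np.log10(sizes)[::-1])

d10, d30, d60 = d_percent(10), d_percent(30), d_percent(60)
cu = d60 / d10              # coefficient of uniformity
cc = d30**2 / (d10 * d60)   # coefficient of curvature
print(f"D10 = {d10:.3f} mm, D50 = {d_percent(50):.3f} mm, Cu = {cu:.1f}, Cc = {cc:.2f}")
```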
Hydrometer analysis
The classification of fine-grained soils, i.e., soils that are finer than sand, is determined primarily by their Atterberg limits, not by their grain size. If it is important to determine the grain size distribution of fine-grained soils, the hydrometer test may be performed. In the hydrometer tests, the soil particles are mixed with water and shaken to produce a dilute suspension in a glass cylinder, and then the cylinder is left to sit. A hydrometer is used to measure the density of the suspension as a function of time. Clay particles may take several hours to settle past the depth of measurement of the hydrometer. Sand particles may take less than a second. Stokes' law provides the theoretical basis to calculate the relationship between sedimentation velocity and particle size. ASTM provides the detailed procedures for performing the Hydrometer test.
Clay particles can be sufficiently small that they never settle because they are kept in suspension by Brownian motion, in which case they may be classified as colloids.
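Stokes' law, mentioned above as the basis of the hydrometer test, can be evaluated directly to see why clay-size particles take hours to settle while silt settles much faster. The sketch below assumes spherical particles, water at room temperature and an illustrative particle density; the numerical inputs are assumptions rather than values from the article.

```python
# Stokes' law terminal settling velocity for a small sphere in water:
#   v = (rho_s - rho_w) * g * d**2 / (18 * mu)

def stokes_velocity(d, rho_s=2650.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Settling velocity (m/s) for a particle of diameter d (m).
    rho_s: particle density (kg/m^3), rho_w: water density (kg/m^3),
    mu: dynamic viscosity of water (Pa*s)."""
    return (rho_s - rho_w) * g * d**2 / (18.0 * mu)

for d_mm, label in [(0.05, "coarse silt"), (0.002, "clay-size")]:
    v = stokes_velocity(d_mm / 1000.0)
    print(f"{label}: d = {d_mm} mm, v ~ {v:.2e} m/s")
```

For the assumed values this gives roughly millimetres per second for coarse silt but only micrometres per second for a 2 μm particle, consistent with the statement that clay may take hours to settle past the hydrometer's measurement depth.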
Mass-volume relations
There are a variety of parameters used to describe the relative proportions of air, water and solid in a soil. This section defines these parameters and some of their interrelationships. The basic notation is as follows:
$V_a$, $V_w$, and $V_s$ represent the volumes of air, water and solids in a soil mixture;
$W_a$, $W_w$, and $W_s$ represent the weights of air, water and solids in a soil mixture;
$M_a$, $M_w$, and $M_s$ represent the masses of air, water and solids in a soil mixture;
$\rho_a$, $\rho_w$, and $\rho_s$ represent the densities of the constituents (air, water and solids) in a soil mixture;
Note that the weights, W, can be obtained by multiplying the mass, M, by the acceleration due to gravity, g; e.g., $W_s = M_s g$
Specific Gravity is the ratio of the density of one material compared to the density of pure water ($\rho_w$).
Specific gravity of solids, $G_s = \frac{\rho_s}{\rho_w}$
Note that specific weight, conventionally denoted by the symbol $\gamma$, may be obtained by multiplying the density ($\rho$) of a material by the acceleration due to gravity, $g$: $\gamma = \rho g$.
Density, Bulk Density, or Wet Density, $\rho$, are different names for the density of the mixture, i.e., the total mass of air, water and solids divided by the total volume of air, water and solids (the mass of air is assumed to be zero for practical purposes): $\rho = \frac{M_s + M_w}{V_s + V_w + V_a}$
Dry Density, $\rho_d$, is the mass of solids divided by the total volume of air, water and solids: $\rho_d = \frac{M_s}{V_s + V_w + V_a}$
Buoyant Density, $\rho'$, defined as the density of the mixture minus the density of water, is useful if the soil is submerged under water: $\rho' = \rho - \rho_w$
where $\rho_w$ is the density of water
Water Content, $w$, is the ratio of mass of water to mass of solid: $w = \frac{M_w}{M_s}$. It is easily measured by weighing a sample of the soil, drying it out in an oven and re-weighing. Standard procedures are described by ASTM.
Void ratio, $e$, is the ratio of the volume of voids to the volume of solids: $e = \frac{V_a + V_w}{V_s}$
Porosity, $n$, is the ratio of volume of voids to the total volume, and is related to the void ratio: $n = \frac{V_a + V_w}{V_a + V_w + V_s} = \frac{e}{1+e}$
Degree of saturation, $S$, is the ratio of the volume of water to the volume of voids: $S = \frac{V_w}{V_a + V_w}$
From the above definitions, some useful relationships can be derived by use of basic algebra.
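Because the definitions above chain together, a handful of measured quantities determines the rest. A minimal sketch, assuming a measured wet mass, dry mass and total volume together with an assumed specific gravity of solids (all numbers below are hypothetical), is:

```python
# Phase-relationship calculator for a soil specimen.
RHO_W = 1000.0  # density of water, kg/m^3

def phase_relations(m_wet, m_dry, v_total, Gs=2.70):
    """m_wet, m_dry in kg; v_total in m^3; Gs = specific gravity of solids (assumed)."""
    m_water = m_wet - m_dry
    w = m_water / m_dry                # water content
    v_solids = m_dry / (Gs * RHO_W)    # volume of solids
    v_water = m_water / RHO_W
    v_voids = v_total - v_solids
    e = v_voids / v_solids             # void ratio
    n = e / (1 + e)                    # porosity
    S = v_water / v_voids              # degree of saturation
    return dict(w=w, e=e, n=n, S=S,
                rho_bulk=m_wet / v_total, rho_dry=m_dry / v_total)

# Example: 1.90 kg wet, 1.62 kg dry, 1.0 litre total volume.
print(phase_relations(1.90, 1.62, 1.0e-3))
```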
Soil classification
Geotechnical engineers classify the soil particle types by performing tests on disturbed (dried, passed through sieves, and remolded) samples of the soil. This provides information about the characteristics of the soil grains themselves. Classification of the types of grains present in a soil does not account for important effects of the structure or fabric of the soil, terms that describe compactness of the particles and patterns in the arrangement of particles in a load carrying framework as well as the pore size and pore fluid distributions. Engineering geologists also classify soils based on their genesis and depositional history.
Classification of soil grains
In the US and other countries, the Unified Soil Classification System (USCS) is often used for soil classification. Other classification systems include the British Standard BS 5930 and the AASHTO soil classification system.
Classification of sands and gravels
In the USCS, gravels (given the symbol G) and sands (given the symbol S) are classified according to their grain size distribution. For the USCS, gravels may be given the classification symbol GW (well-graded gravel), GP (poorly graded gravel), GM (gravel with a large amount of silt), or GC (gravel with a large amount of clay). Likewise sands may be classified as being SW, SP, SM or SC. Sands and gravels with a small but non-negligible amount of fines (5–12%) may be given a dual classification such as SW-SC.
Atterberg limits
Clays and Silts, often called 'fine-grained soils', are classified according to their Atterberg limits; the most commonly used Atterberg limits are the Liquid Limit (denoted by LL or ), Plastic Limit (denoted by PL or ), and Shrinkage Limit (denoted by SL).
The Liquid Limit is the water content at which the soil behavior transitions from a plastic solid to a liquid. The Plastic Limit is the water content at which the soil behavior transitions from that of a plastic solid to a brittle solid. The Shrinkage Limit corresponds to a water content below which the soil will not shrink as it dries. The consistency of fine grained soil varies in proportion to the water content in a soil.
As the transitions from one state to another are gradual, the tests have adopted arbitrary definitions to determine the boundaries of the states. The liquid limit is determined by measuring the water content for which a groove closes after 25 blows in a standard test. Alternatively, a fall cone test apparatus may be used to measure the liquid limit. The undrained shear strength of remolded soil at the liquid limit is approximately 2 kPa. The Plastic Limit is the water content below which it is not possible to roll by hand the soil into 3 mm diameter cylinders. The soil cracks or breaks up as it is rolled down to this diameter. Remolded soil at the plastic limit is quite stiff, having an undrained shear strength of the order of about 200 kPa.
The Plasticity Index of a particular soil specimen is defined as the difference between the Liquid Limit and the Plastic Limit of the specimen; it is an indicator of how much water the soil particles in the specimen can absorb, and correlates with many engineering properties like permeability, compressibility, shear strength and others. Generally, clays with high plasticity have lower permeability and are also more difficult to compact.
Classification of silts and clays
According to the Unified Soil Classification System (USCS), silts and clays are classified by plotting the values of their plasticity index and liquid limit on a plasticity chart. The A-Line on the chart separates clays (given the USCS symbol C) from silts (given the symbol M). LL=50% separates high plasticity soils (given the modifier symbol H) from low plasticity soils (given the modifier symbol L). A soil that plots above the A-line and has LL>50% would, for example, be classified as CH. Other possible classifications of silts and clays are ML, CL and MH. If the Atterberg limits plot in the "hatched" region on the graph near the origin, the soils are given the dual classification 'CL-ML'.
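The plasticity-chart procedure can be written out explicitly. The sketch below uses the standard A-line equation PI = 0.73(LL − 20) and the LL = 50% boundary to assign the basic USCS symbols for fine-grained soils; it deliberately ignores the CL-ML dual zone and organic soils.

```python
def classify_fines(LL, PI):
    """Basic USCS symbol for a fine-grained soil from its liquid limit LL (%)
    and plasticity index PI (%). Simplified: the CL-ML dual zone and
    organic soils are not handled."""
    a_line = 0.73 * (LL - 20.0)   # Casagrande A-line
    clay = PI > a_line            # above the A-line -> clay (C); below -> silt (M)
    high = LL >= 50.0             # high vs low plasticity
    return ("C" if clay else "M") + ("H" if high else "L")

print(classify_fines(LL=60, PI=35))   # CH: high-plasticity clay
print(classify_fines(LL=35, PI=8))    # ML: low-plasticity silt
```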
Indices related to soil strength
Liquidity index
The effects of the water content on the strength of saturated remolded soils can be quantified by the use of the liquidity index, LI:

$$LI = \frac{w - PL}{LL - PL}$$
When the LI is 1, remolded soil is at the liquid limit and it has an undrained shear strength of about 2 kPa. When the soil is at the plastic limit, the LI is 0 and the undrained shear strength is about 200 kPa.
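The two anchor values quoted above (about 200 kPa at LI = 0 and about 2 kPa at LI = 1) imply a roughly log-linear drop in remolded undrained strength with liquidity index. The sketch below computes LI and interpolates between those two values; the log-linear interpolation is only an illustrative approximation, not a design rule.

```python
def liquidity_index(w, PL, LL):
    """Liquidity index from water content w and the plastic and liquid limits (all in %)."""
    return (w - PL) / (LL - PL)

def undrained_strength_estimate(LI):
    """Rough remolded undrained shear strength (kPa), log-linearly interpolated
    between ~200 kPa at LI = 0 and ~2 kPa at LI = 1 (illustrative only)."""
    return 200.0 * 10.0 ** (-2.0 * LI)

LI = liquidity_index(w=42, PL=22, LL=62)
print(f"LI = {LI:.2f}, s_u ~ {undrained_strength_estimate(LI):.0f} kPa")
```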
Relative density
The density of sands (cohesionless soils) is often characterized by the relative density, $D_r$:

$$D_r = \frac{e_{max} - e}{e_{max} - e_{min}} \times 100\%$$

where: $e_{max}$ is the "maximum void ratio" corresponding to a very loose state, $e_{min}$ is the "minimum void ratio" corresponding to a very dense state and $e$ is the in situ void ratio. Methods used to calculate relative density are defined in ASTM D4254-00(2006).
Thus if $D_r = 100\%$ the sand or gravel is very dense, and if $D_r = 0\%$ the soil is extremely loose and unstable.
Seepage: steady state flow of water
If fluid pressures in a soil deposit are uniformly increasing with depth according to

$$u = \rho_w g z_w$$

then hydrostatic conditions will prevail and the fluids will not be flowing through the soil. Here $z_w$ is the depth below the water table. However, if the water table is sloping or there is a perched water table as indicated in the accompanying sketch, then seepage will occur. For steady state seepage, the seepage velocities are not varying with time. If the water tables are changing levels with time, or if the soil is in the process of consolidation, then steady state conditions do not apply.
Darcy's law
Darcy's law states that the volume of flow of the pore fluid through a porous medium per unit time is proportional to the rate of change of excess fluid pressure with distance. The constant of proportionality includes the viscosity of the fluid and the intrinsic permeability of the soil. For the simple case of a horizontal tube filled with soil:

$$Q = \frac{-\kappa A}{\mu}\frac{(u_b - u_a)}{L}$$

The total discharge, $Q$ (having units of volume per time, e.g., ft3/s or m3/s), is proportional to the intrinsic permeability, $\kappa$, the cross sectional area, $A$, and rate of pore pressure change with distance, $\frac{u_b - u_a}{L}$, and inversely proportional to the dynamic viscosity of the fluid, $\mu$. The negative sign is needed because fluids flow from high pressure to low pressure. So if the change in pressure is negative (in the $x$-direction) then the flow will be positive (in the $x$-direction). The above equation works well for a horizontal tube, but if the tube were inclined so that point b were at a different elevation than point a, the equation would not work. The effect of elevation is accounted for by replacing the pore pressure by excess pore pressure, $u_e$, defined as:

$$u_e = u - \rho_w g z$$

where $z$ is the depth measured from an arbitrary elevation reference (datum). Replacing $u$ by $u_e$ we obtain a more general equation for flow:

$$Q = \frac{-\kappa A}{\mu}\frac{(u_{e,b} - u_{e,a})}{L}$$

Dividing both sides of the equation by $A$, and expressing the rate of change of excess pore pressure as a derivative, we obtain a more general equation for the apparent velocity in the x-direction:

$$v_x = \frac{-\kappa}{\mu}\frac{\partial u_e}{\partial x}$$

where $v_x$ has units of velocity and is called the Darcy velocity (or the specific discharge, filtration velocity, or superficial velocity). The pore or interstitial velocity $v_{px}$ is the average velocity of fluid molecules in the pores; it is related to the Darcy velocity and the porosity $n$ through the Dupuit-Forchheimer relationship

$$v_{px} = \frac{v_x}{n}$$
(Some authors use the term seepage velocity to mean the Darcy velocity, while others use it to mean the pore velocity.)
Civil engineers predominantly work on problems that involve water and predominantly work on problems on earth (in earth's gravity). For this class of problems, civil engineers will often write Darcy's law in a much simpler form:

$$v_x = k i_x$$

where $k$ is the hydraulic conductivity, defined as $k = \frac{\kappa \rho_w g}{\mu}$, and $i_x$ is the hydraulic gradient. The hydraulic gradient is the rate of change of total head with distance. The total head, $h$, at a point is defined as the height (measured relative to the datum) to which water would rise in a piezometer at that point. The total head is related to the excess water pressure by:

$$h = \frac{u_e}{\rho_w g} + Const.$$

and the $Const.$ is zero if the datum for head measurement is chosen at the same elevation as the origin for the depth, $z$, used to calculate $u_e$.
Typical values of hydraulic conductivity
Values of hydraulic conductivity, $k$, can vary by many orders of magnitude depending on the soil type. Clays may have hydraulic conductivity as small as about $10^{-12}$ m/s, and gravels may have hydraulic conductivity up to about $10^{-1}$ m/s. Layering and heterogeneity and disturbance during the sampling and testing process make the accurate measurement of soil hydraulic conductivity a very difficult problem.
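Darcy's law in the hydraulic-conductivity form is simple to apply numerically. The sketch below estimates steady seepage through a soil element from an assumed head drop, flow length, cross-sectional area and hydraulic conductivity; all numbers are hypothetical, chosen only to show the enormous difference between a sand and a clay.

```python
def darcy_discharge(k, head_drop, length, area):
    """Steady-state discharge Q (m^3/s) through a soil element:
    Q = k * i * A, with hydraulic gradient i = head_drop / length.
    k: hydraulic conductivity (m/s); head_drop, length (m); area (m^2)."""
    i = head_drop / length
    return k * i * area

# Example: a 2 m long column, 0.5 m head difference, 0.1 m^2 cross-section.
q_sand = darcy_discharge(k=1e-4, head_drop=0.5, length=2.0, area=0.1)
q_clay = darcy_discharge(k=1e-9, head_drop=0.5, length=2.0, area=0.1)
print(f"sand: {q_sand:.2e} m^3/s, clay: {q_clay:.2e} m^3/s")
```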
Flownets
Darcy's Law applies in one, two or three dimensions. In two or three dimensions, steady state seepage is described by Laplace's equation. Computer programs are available to solve this equation. But traditionally two-dimensional seepage problems were solved using a graphical procedure known as flownet. One set of lines in the flownet are in the direction of the water flow (flow lines), and the other set of lines are in the direction of constant total head (equipotential lines). Flownets may be used to estimate the quantity of seepage under dams and sheet piling.
Seepage forces and erosion
When the seepage velocity is great enough, erosion can occur because of the frictional drag exerted on the soil particles. Vertically upwards seepage is a source of danger on the downstream side of sheet piling and beneath the toe of a dam or levee. Erosion of the soil, known as "soil piping", can lead to failure of the structure and to sinkhole formation. Seeping water removes soil, starting from the exit point of the seepage, and erosion advances upgradient. The term "sand boil" is used to describe the appearance of the discharging end of an active soil pipe.
Seepage pressures
Seepage in an upward direction reduces the effective stress within the soil. When the water pressure at a point in the soil is equal to the total vertical stress at that point, the effective stress is zero and the soil has no frictional resistance to deformation. For a surface layer, the vertical effective stress becomes zero within the layer when the upward hydraulic gradient is equal to the critical gradient. At zero effective stress soil has very little strength and layers of relatively impermeable soil may heave up due to the underlying water pressures. The loss in strength due to upward seepage is a common contributor to levee failures. The condition of zero effective stress associated with upward seepage is also called liquefaction, quicksand, or a boiling condition. Quicksand was so named because the soil particles move around and appear to be 'alive' (the biblical meaning of 'quick' – as opposed to 'dead'). (Note that it is not possible to be 'sucked down' into quicksand. On the contrary, you would float with about half your body out of the water.)
Effective stress and capillarity: hydrostatic conditions
To understand the mechanics of soils it is necessary to understand how normal stresses and shear stresses are shared by the different phases. Neither gas nor liquid provide significant resistance to shear stress. The shear resistance of soil is provided by friction and interlocking of the particles. The friction depends on the intergranular contact stresses between solid particles. The normal stresses, on the other hand, are shared by the fluid and the particles. Although the pore air is relatively compressible, and hence takes little normal stress in most geotechnical problems, liquid water is relatively incompressible and if the voids are saturated with water, the pore water must be squeezed out in order to pack the particles closer together.
The principle of effective stress, introduced by Karl Terzaghi, states that the effective stress σ' (i.e., the average intergranular stress between solid particles) may be calculated by a simple subtraction of the pore pressure from the total stress:

$$\sigma' = \sigma - u$$
where σ is the total stress and u is the pore pressure. It is not practical to measure σ' directly, so in practice the vertical effective stress is calculated from the pore pressure and vertical total stress. The distinction between the terms pressure and stress is also important. By definition, pressure at a point is equal in all directions but stresses at a point can be different in different directions. In soil mechanics, compressive stresses and pressures are considered to be positive and tensile stresses are considered to be negative, which is different from the solid mechanics sign convention for stress.
Total stress
For level ground conditions, the total vertical stress at a point, $\sigma_v$, on average, is the weight of everything above that point per unit area. The vertical stress beneath a uniform surface layer with density $\rho$ and thickness $H$ is, for example:

$$\sigma_v = \rho g H = \gamma H$$

where $g$ is the acceleration due to gravity, and $\gamma$ is the unit weight of the overlying layer. If there are multiple layers of soil or water above the point of interest, the vertical stress may be calculated by summing the product of the unit weight and thickness of all of the overlying layers. Total stress increases with increasing depth in proportion to the density of the overlying soil.
It is not possible to calculate the horizontal total stress in this way. Lateral earth pressures are addressed elsewhere.
Pore water pressure
Hydrostatic conditions
If the soil pores are filled with water that is not flowing but is static, the pore water pressures will be hydrostatic. The water table is located at the depth where the water pressure is equal to the atmospheric pressure. For hydrostatic conditions, the water pressure increases linearly with depth below the water table:

$$u = \rho_w g z_w$$

where $\rho_w$ is the density of water, and $z_w$ is the depth below the water table.
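Putting the total-stress and pore-pressure expressions together gives the vertical effective stress at any depth. The sketch below evaluates σ, u and σ' for a uniform deposit with a hydrostatic water table; the bulk density is an assumed illustrative value, and real profiles are summed layer by layer as described above.

```python
G = 9.81        # m/s^2
RHO_W = 1000.0  # kg/m^3

def stresses_at_depth(z, rho_soil=2000.0, water_table_depth=0.0):
    """Vertical total stress, pore pressure and effective stress (kPa)
    at depth z (m) in a uniform deposit with a hydrostatic water table."""
    sigma_v = rho_soil * G * z / 1000.0       # total vertical stress
    z_w = max(z - water_table_depth, 0.0)     # depth below the water table
    u = RHO_W * G * z_w / 1000.0              # pore water pressure
    return sigma_v, u, sigma_v - u            # effective stress = sigma - u

for depth in (1.0, 5.0, 10.0):
    s, u, s_eff = stresses_at_depth(depth)
    print(f"z = {depth:4.1f} m: sigma = {s:6.1f} kPa, u = {u:5.1f} kPa, sigma' = {s_eff:6.1f} kPa")
```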
Capillary action
Due to surface tension, water will rise up in a small capillary tube above a free surface of water. Likewise, water will rise up above the water table into the small pore spaces around the soil particles. In fact the soil may be completely saturated for some distance above the water table. Above the height of capillary saturation, the soil may be wet but the water content will decrease with elevation. If the water in the capillary zone is not moving, the water pressure obeys the equation of hydrostatic equilibrium, $u = \rho_w g z_w$, but note that $z_w$ is negative above the water table. Hence, hydrostatic water pressures are negative above the water table. The thickness of the zone of capillary saturation depends on the pore size, but typically, the heights vary between a centimeter or so for coarse sand to tens of meters for a silt or clay. In fact the pore space of soil is a uniform fractal e.g. a set of uniformly distributed D-dimensional fractals of average linear size L. For the clay soil it has been found that L=0.15 mm and D=2.7.
The surface tension of water explains why the water does not drain out of a wet sand castle or a moist ball of clay. Negative water pressures make the water stick to the particles and pull the particles to each other, friction at the particle contacts make a sand castle stable. But as soon as a wet sand castle is submerged below a free water surface, the negative pressures are lost and the castle collapses. Considering the effective stress equation, if the water pressure is negative, the effective stress may be positive, even on a free surface (a surface where the total normal stress is zero). The negative pore pressure pulls the particles together and causes compressive particle to particle contact forces.
Negative pore pressures in clayey soil can be much more powerful than those in sand. Negative pore pressures explain why clay soils shrink when they dry and swell as they are wetted. The swelling and shrinkage can cause major distress, especially to light structures and roads.
Later sections of this article address the pore water pressures for seepage and consolidation problems.
Consolidation: transient flow of water
Consolidation is a process by which soils decrease in volume. It occurs when stress is applied to a soil that causes the soil particles to pack together more tightly, therefore reducing volume. When this occurs in a soil that is saturated with water, water will be squeezed out of the soil. The time required to squeeze the water out of a thick deposit of clayey soil layer might be years. For a layer of sand, the water may be squeezed out in a matter of seconds. A building foundation or construction of a new embankment will cause the soil below to consolidate and this will cause settlement which in turn may cause distress to the building or embankment. Karl Terzaghi developed the theory of one-dimensional consolidation which enables prediction of the amount of settlement and the time required for the settlement to occur. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity. Soils are tested with an oedometer test to determine their compression index and coefficient of consolidation.
When stress is removed from a consolidated soil, the soil will rebound, drawing water back into the pores and regaining some of the volume it had lost in the consolidation process. If the stress is reapplied, the soil will re-consolidate again along a recompression curve, defined by the recompression index. Soil that has been consolidated to a large pressure and has been subsequently unloaded is considered to be overconsolidated. The maximum past vertical effective stress is termed the preconsolidation stress. A soil which is currently experiencing the maximum past vertical effective stress is said to be normally consolidated. The overconsolidation ratio, (OCR) is the ratio of the maximum past vertical effective stress to the current vertical effective stress. The OCR is significant for two reasons: firstly, because the compressibility of normally consolidated soil is significantly larger than that for overconsolidated soil, and secondly, the shear behavior and dilatancy of clayey soil are related to the OCR through critical state soil mechanics; highly overconsolidated clayey soils are dilatant, while normally consolidated soils tend to be contractive.
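Terzaghi's one-dimensional theory expresses the average degree of consolidation as a series in the dimensionless time factor Tv = cv t / H², where cv is the coefficient of consolidation and H is the drainage path length. The sketch below evaluates the standard series solution; the soil parameters in the example are assumptions chosen to illustrate why clay layers take years to consolidate.

```python
import math

def degree_of_consolidation(Tv, terms=100):
    """Average degree of consolidation U (0..1) at time factor Tv,
    from the series solution of Terzaghi's one-dimensional theory."""
    total = 0.0
    for m in range(terms):
        M = math.pi * (2 * m + 1) / 2.0
        total += (2.0 / M**2) * math.exp(-(M**2) * Tv)
    return 1.0 - total

# Example: cv = 1e-7 m^2/s, a 4 m thick clay layer drained top and bottom (H = 2 m).
cv, H = 1.0e-7, 2.0
for years in (0.5, 2, 10):
    t = years * 365.25 * 24 * 3600.0
    Tv = cv * t / H**2
    print(f"{years:4} yr: Tv = {Tv:.3f}, U = {degree_of_consolidation(Tv):.2f}")
```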
Shear behavior: stiffness and strength
The shear strength and stiffness of soil determines whether or not soil will be stable or how much it will deform. Knowledge of the strength is necessary to determine if a slope will be stable, if a building or bridge might settle too far into the ground, and the limiting pressures on a retaining wall. It is important to distinguish between failure of a soil element and the failure of a geotechnical structure (e.g., a building foundation, slope or retaining wall); some soil elements may reach their peak strength prior to failure of the structure. Different criteria can be used to define the "shear strength" and the "yield point" for a soil element from a stress–strain curve. One may define the peak shear strength as the peak of a stress–strain curve, or the shear strength at critical state as the value after large strains when the shear resistance levels off. If the stress–strain curve does not stabilize before the end of shear strength test, the "strength" is sometimes considered to be the shear resistance at 15–20% strain. The shear strength of soil depends on many factors including the effective stress and the void ratio.
The shear stiffness is important, for example, for evaluation of the magnitude of deformations of foundations and slopes prior to failure and because it is related to the shear wave velocity. The slope of the initial, nearly linear, portion of a plot of shear stress as a function of shear strain is called the shear modulus, $G$.
Friction, interlocking and dilation
Soil is an assemblage of particles that have little to no cementation while rock (such as sandstone) may consist of an assembly of particles that are strongly cemented together by chemical bonds. The shear strength of soil is primarily due to interparticle friction and therefore, the shear resistance on a plane is approximately proportional to the effective normal stress on that plane. The angle of internal friction is thus closely related to the maximum stable slope angle, often called the angle of repose.
But in addition to friction, soil derives significant shear resistance from interlocking of grains. If the grains are densely packed, the grains tend to spread apart from each other as they are subject to shear strain. The expansion of the particle matrix due to shearing was called dilatancy by Osborne Reynolds. If one considers the energy required to shear an assembly of particles there is energy input by the shear force, T, moving a distance, x and there is also energy input by the normal force, N, as the sample expands a distance, y. Due to the extra energy required for the particles to dilate against the confining pressures, dilatant soils have a greater peak strength than contractive soils. Furthermore, as dilative soil grains dilate, they become looser (their void ratio increases), and their rate of dilation decreases until they reach a critical void ratio. Contractive soils become denser as they shear, and their rate of contraction decreases until they reach a critical void ratio.
The tendency for a soil to dilate or contract depends primarily on the confining pressure and the void ratio of the soil. The rate of dilation is high if the confining pressure is small and the void ratio is small. The rate of contraction is high if the confining pressure is large and the void ratio is large. As a first approximation, the regions of contraction and dilation are separated by the critical state line.
Failure criteria
After a soil reaches the critical state, it is no longer contracting or dilating and the shear stress on the failure plane $\tau$
is determined by the effective normal stress on the failure plane $\sigma_n'$
and the critical state friction angle $\phi_{cv}'$:

$$\tau = \sigma_n' \tan \phi_{cv}'$$
The peak strength of the soil may be greater, however, due to the interlocking (dilatancy) contribution.
This may be stated:

$$\tau_{peak} = \sigma_n' \tan \phi_{peak}'$$

where $\phi_{peak}' > \phi_{cv}'$. However, use of a friction angle greater than the critical state value for design requires care. The peak strength will not be mobilized everywhere at the same time in a practical problem such as a foundation, slope or retaining wall. The critical state friction angle is not nearly as variable as the peak friction angle and hence it can be relied upon with confidence.
Not recognizing the significance of dilatancy, Coulomb proposed that the shear strength of soil may be expressed as a combination of adhesion and friction components:

$$\tau = \sigma' \tan \phi' + c'$$

It is now known that the $c'$ and $\phi'$ parameters in the last equation are not fundamental soil properties (Terzaghi, Peck and Mesri, Soil Mechanics in Engineering Practice, Third Edition, John Wiley & Sons, 1996). In particular, $c'$ and $\phi'$ are different depending on the magnitude of effective stress. According to Schofield (2006), the longstanding use of $c'$ in practice has led many engineers to wrongly believe that $c'$ is a fundamental parameter. This assumption that $c'$ and $\phi'$ are constant can lead to overestimation of peak strengths.
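The caution above about c' and φ' can be made concrete by comparing a friction-only critical-state estimate with a peak Coulomb fit that includes an apparent cohesion intercept. The parameter values in the sketch below are illustrative assumptions only.

```python
import math

def shear_strength_critical(sigma_n_eff, phi_cv_deg):
    """Critical-state strength: tau = sigma'_n * tan(phi'_cv)."""
    return sigma_n_eff * math.tan(math.radians(phi_cv_deg))

def shear_strength_coulomb(sigma_n_eff, c_eff, phi_deg):
    """Coulomb fit with apparent cohesion: tau = c' + sigma'_n * tan(phi')."""
    return c_eff + sigma_n_eff * math.tan(math.radians(phi_deg))

for sigma in (25.0, 100.0, 400.0):   # effective normal stress, kPa
    cs = shear_strength_critical(sigma, phi_cv_deg=32.0)
    mc = shear_strength_coulomb(sigma, c_eff=10.0, phi_deg=28.0)
    print(f"sigma' = {sigma:5.0f} kPa: critical state {cs:6.1f} kPa, Coulomb fit {mc:6.1f} kPa")
```

The two fits agree only over a limited stress range and diverge outside it, which is the practical sense in which c' and φ' behave as curve-fitting parameters rather than fundamental properties.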
Structure, fabric, and chemistry
In addition to the friction and interlocking (dilatancy) components of strength, the structure and fabric also play a significant role in the soil behavior. The structure and fabric include factors such as the spacing and arrangement of the solid particles or the amount and spatial distribution of pore water; in some cases cementitious material accumulates at particle-particle contacts. Mechanical behavior of soil is affected by the density of the particles and their structure or arrangement of the particles as well as the amount and spatial distribution of fluids present (e.g., water and air voids). Other factors include the electrical charge of the particles, chemistry of pore water, and chemical bonds (i.e. cementation: particles connected through a solid substance such as recrystallized calcium carbonate).
Drained and undrained shear
The presence of nearly incompressible fluids such as water in the pore spaces affects the ability for the pores to dilate or contract.
If the pores are saturated with water, water must be sucked into the dilating pore spaces to fill the expanding pores (this phenomenon is visible at the beach when apparently dry spots form around feet that press into the wet sand).
Similarly, for contractive soil, water must be squeezed out of the pore spaces to allow contraction to take place.
Dilation of the voids causes negative water pressures that draw fluid into the pores, and contraction of the voids causes positive pore pressures to push the water out of the pores. If the rate of shearing is very large compared to the rate that water can be sucked into or squeezed out of the dilating or contracting pore spaces, then the shearing is called undrained shear; if the shearing is slow enough that the water pressures are negligible, the shearing is called drained shear. During undrained shear, the water pressure u changes depending on volume change tendencies. From the effective stress equation, the change in u directly affects the effective stress by the equation:

$$\sigma' = \sigma - u$$
and the strength is very sensitive to the effective stress. It follows then that the undrained shear strength of a soil may be smaller or larger than the drained shear strength depending upon whether the soil is contractive or dilative.
Shear tests
Strength parameters can be measured in the laboratory using direct shear test, triaxial shear test, simple shear test, fall cone test and (hand) shear vane test; there are numerous other devices and variations on these devices used in practice today. Tests conducted to characterize the strength and stiffness of the soils in the ground include the Cone penetration test and the Standard penetration test.
Other factors
The stress–strain relationship of soils, and therefore the shearing strength, is affected by:
soil composition (basic soil material): mineralogy, grain size and grain size distribution, shape of particles, pore fluid type and content, ions on grain and in pore fluid.
state (initial): Defined by the initial void ratio, effective normal stress and shear stress (stress history). State can be described by terms such as: loose, dense, overconsolidated, normally consolidated, stiff, soft, contractive, dilative, etc.
structure: Refers to the arrangement of particles within the soil mass; the manner in which the particles are packed or distributed. Features such as layers, joints, fissures, slickensides, voids, pockets, cementation, etc., are part of the structure. Structure of soils is described by terms such as: undisturbed, disturbed, remolded, compacted, cemented; flocculent, honey-combed, single-grained; flocculated, deflocculated; stratified, layered, laminated; isotropic and anisotropic.
Loading conditions: Effective stress path - drained, undrained, and type of loading - magnitude, rate (static, dynamic), and time history (monotonic, cyclic).
Applications
Lateral earth pressure
Lateral earth stress theory is used to estimate the amount of stress soil can exert perpendicular to gravity. This is the stress exerted on retaining walls. A lateral earth stress coefficient, K, is defined as the ratio of lateral (horizontal) effective stress to vertical effective stress for cohesionless soils (K=σ'h/σ'v). There are three coefficients: at-rest, active, and passive. At-rest stress is the lateral stress in the ground before any disturbance takes place. The active stress state is reached when a wall moves away from the soil under the influence of lateral stress, and results from shear failure due to reduction of lateral stress. The passive stress state is reached when a wall is pushed into the soil far enough to cause shear failure within the mass due to increase of lateral stress. There are many theories for estimating lateral earth stress; some are empirically based, and some are analytically derived.
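For the idealized case of a smooth vertical wall, level cohesionless backfill and planar failure surfaces, Rankine's theory gives closed-form active and passive coefficients; Jaky's expression is a common estimate for the at-rest value. This is just one of the many theories mentioned above, evaluated here with an assumed friction angle.

```python
import math

def rankine_coefficients(phi_deg):
    """Rankine active/passive coefficients and Jaky's at-rest estimate
    for a cohesionless soil with effective friction angle phi' (degrees)."""
    s = math.sin(math.radians(phi_deg))
    Ka = (1 - s) / (1 + s)   # active
    Kp = (1 + s) / (1 - s)   # passive
    K0 = 1 - s               # at rest (Jaky)
    return Ka, K0, Kp

Ka, K0, Kp = rankine_coefficients(30.0)
print(f"phi' = 30 deg: Ka = {Ka:.2f}, K0 = {K0:.2f}, Kp = {Kp:.2f}")
# Horizontal effective stress where the vertical effective stress is 100 kPa:
print(f"active: {Ka*100:.0f} kPa, at rest: {K0*100:.0f} kPa, passive: {Kp*100:.0f} kPa")
```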
Bearing capacity
The bearing capacity of soil is the average contact stress between a foundation and the soil which will cause shear failure in the soil. Allowable bearing stress is the bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing stress is determined with regard to the maximum allowable settlement. It is important during construction and design stage of a project to evaluate the subgrade strength. The California Bearing Ratio (CBR) test is commonly used to determine the suitability of a soil as a subgrade for design and construction. The field Plate Load Test is commonly used to predict the deformations and failure characteristics of the soil/subgrade and modulus of subgrade reaction (ks). The Modulus of subgrade reaction (ks) is used in foundation design, soil-structure interaction studies and design of highway pavements.
Slope stability
The field of slope stability encompasses the analysis of static and dynamic stability of slopes of earth and rock-fill dams, slopes of other types of embankments, excavated slopes, and natural slopes in soil and soft rock.
Earthen slopes can develop a cut-spherical weakness zone. The probability of this happening can be calculated in advance using a simple 2-D circular analysis package. A primary difficulty with analysis is locating the most-probable slip plane for any given situation. Many landslides have been analyzed only after the fact. Slope geometry and rock or soil strength are two factors for consideration.
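A very common first check is the infinite-slope model, in which a long planar slip surface runs parallel to the ground surface. The sketch below computes its factor of safety for assumed slope geometry, unit weight, strength parameters and pore pressure; circular or more general slip surfaces, as described above, require dedicated analysis packages.

```python
import math

def infinite_slope_fs(beta_deg, z, gamma=19.0, c_eff=5.0, phi_deg=30.0, u=0.0):
    """Factor of safety of an infinite slope.
    beta_deg: slope angle; z: depth to the slip plane (m); gamma: unit weight (kN/m^3);
    c_eff: effective cohesion (kPa); phi_deg: friction angle; u: pore pressure (kPa)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c_eff + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    return resisting / driving

print(f"dry slope: FS = {infinite_slope_fs(25.0, 3.0):.2f}")
print(f"wet slope: FS = {infinite_slope_fs(25.0, 3.0, u=15.0):.2f}")  # seepage raises u and lowers FS
```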
Recent developments
A recent finding in soil mechanics is that soil deformation can be described as the behavior of a dynamical system. This approach to soil mechanics is referred to as Dynamical Systems based Soil Mechanics (DSSM). DSSM holds simply that soil deformation is a Poisson process in which particles move to their final position at random shear strains.
The basis of DSSM is that soils (including sands) can be sheared till they reach a steady-state condition at which, under conditions of constant strain-rate, there is no change in shear stress, effective confining stress, and void ratio. The steady-state was formally defined by Steve J. Poulos an associate professor at the Soil Mechanics Department of Harvard University, who built off a hypothesis that Arthur Casagrande was formulating towards the end of his career. The steady state condition is not the same as the "critical state" condition. It differs from the critical state in that it specifies a statistically constant structure at the steady state. The steady-state values are also very slightly dependent on the strain-rate.
Many systems in nature reach steady states, and dynamical systems theory describes such systems. Soil shear can also be described as a dynamical system. The physical basis of the soil shear dynamical system is a Poisson process in which particles move to the steady-state at random shear strains. Joseph generalized this—particles move to their final position (not just steady-state) at random shear-strains. Because of its origins in the steady state concept, DSSM is sometimes informally called "Harvard soil mechanics."
DSSM provides for very close fits to stress–strain curves, including for sands. Because it tracks conditions on the failure plane, it also provides close fits for the post-failure region of sensitive clays and silts, something that other theories are not able to do. Additionally, DSSM explains key relationships in soil mechanics that to date have simply been taken for granted, for example: why normalized undrained peak shear strengths vary with the log of the overconsolidation ratio and why stress–strain curves normalize with the initial effective confining stress; why in one-dimensional consolidation the void ratio must vary with the log of the effective vertical stress; why the end-of-primary curve is unique for static load increments; and why the ratio of the creep value Cα to the compression index Cc must be approximately constant for a wide range of soils.
See also
Critical state soil mechanics
Earthquake engineering
Engineering geology
Geotechnical centrifuge modeling
Geotechnical engineering
Geotechnical engineering (Offshore)
Geotechnics
Hydrogeology, aquifer characteristics closely related to soil characteristics
International Society for Soil Mechanics and Geotechnical Engineering
Rock mechanics
Slope stability analysis
References
External links | Soil mechanics | [
"Physics"
] | 9,190 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
1,277,505 | https://en.wikipedia.org/wiki/Steam%20explosion | A steam explosion is an explosion caused by violent boiling or flashing of water or ice into steam, occurring when water or ice is either superheated, rapidly heated by fine hot debris produced within it, or heated by the interaction of molten metals (as in a fuel–coolant interaction, or FCI, of molten nuclear-reactor fuel rods with water in a nuclear reactor core following a core-meltdown). Steam explosions are instances of explosive boiling. Pressure vessels, such as pressurized water (nuclear) reactors, that operate above atmospheric pressure can also provide the conditions for a steam explosion. The water changes from a solid or liquid to a gas with extreme speed, increasing dramatically in volume. A steam explosion sprays steam and boiling-hot water and the hot medium that heated it in all directions (if not otherwise confined, e.g. by the walls of a container), creating a danger of scalding and burning.
Steam explosions are not normally chemical explosions, although a number of substances react chemically with steam (for example, zirconium and superheated graphite (impure carbon, C) react with steam and air respectively to give off hydrogen (H2), which may explode violently in air (O2) to form water (H2O)), so that chemical explosions and fires may follow. Some steam explosions appear to be special kinds of boiling liquid expanding vapor explosion (BLEVE), and rely on the release of stored superheat. But many large-scale events, including foundry accidents, show evidence of an energy-release front propagating through the material (see description of FCI below), where the forces create fragments and mix the hot phase into the cold volatile one; and the rapid heat transfer at the front sustains the propagation.
Examples
High steam generation rates can occur under other circumstances, such as boiler-drum failure, or at a quench front (for example when water re-enters a hot dry boiler). Though potentially damaging, they are usually less energetic than events in which the hot ("fuel") phase is molten and so can be finely fragmented within the volatile ("coolant") phase. Some examples follow:
Natural
Steam explosions are naturally produced by certain volcanoes, especially stratovolcanoes, and are a major cause of human fatalities in volcanic eruptions. They are often encountered where hot lava meets sea water or ice. Such an occurrence is also called a littoral explosion. A dangerous steam explosion can also be created when liquid water or ice encounters hot, molten metal. As the water explodes into steam, it splashes the burning hot liquid metal along with it, causing an extreme risk of severe burns to anyone located nearby and creating a fire hazard.
Boiler explosions
When a pressurized container such as the waterside of a steam boiler ruptures, it is always followed by some degree of steam explosion. Marine boilers commonly operate at high temperature and pressure at the outlet of the superheater. A steam boiler has an interface of steam and water in the steam drum, which is where the water is finally evaporating due to the heat input, usually oil-fired burners. When a water tube fails due to any of a variety of reasons, it causes the water in the boiler to expand out of the opening into the furnace area, which is only a few psi above atmospheric pressure. This will likely extinguish all fires, and the water will expand over the large surface area on the sides of the boiler. To decrease the likelihood of a devastating explosion, boilers have gone from the "fire-tube" designs, where the heat was added by passing hot gases through tubes in a body of water, to "water-tube" boilers, in which the water is inside the tubes and the furnace area surrounds the tubes. Old "fire-tube" boilers often failed due to poor build quality or lack of maintenance (such as corrosion of the fire tubes, or fatigue of the boiler shell due to constant expansion and contraction). A failure of fire tubes forces large volumes of high pressure, high temperature steam back down the fire tubes in a fraction of a second and often blows the burners off the front of the boiler, whereas a failure of the pressure vessel surrounding the water would lead to a full and entire evacuation of the boiler's contents in a large steam explosion. On a marine boiler, this would certainly destroy the ship's propulsion plant and possibly the corresponding end of the ship.
Tanks containing crude oil and certain commercial oil cuts, such as some diesel oils and kerosene, may be subject to boilover, an extremely hazardous situation in which a water layer under an open-top tank pool fire starts boiling, which results in a significant increase in fire intensity accompanied by violent expulsion of burning fluid to the surrounding areas. In many cases, the underlying water layer is superheated, in which case part of it goes through explosive boiling. When this happens, the abruptness of the expansion further enhances the expulsion of blazing fuel.
Nuclear reactor meltdown
Events of this general type are also possible if the fuel and fuel elements of a water-cooled nuclear reactor gradually melt. The mixture of molten core structures and fuel is often referred to as "Corium". If such corium comes into contact with water, vapour explosions may occur from the violent interaction between molten fuel (corium) and water as coolant. Such explosions are seen to be fuel–coolant interactions (FCI).
The severity of a steam explosion based on fuel-coolant interaction (FCI) depends strongly on the so-called premixing process, which describes the mixing of the melt with the surrounding water-steam mixture. In general, water-rich premixtures are considered more favorable than steam-rich environments in terms of steam explosion initiation and strength.
The theoretical maximum for the strength of a steam explosion from a given mass of molten corium, which can never be achieved in practice, corresponds to its optimal distribution in the form of molten corium droplets of a certain size. These droplets are surrounded by a suitable volume of water, which in principle results from the maximum possible mass of vaporized water at instantaneous heat exchange between the molten droplet fragmenting in a shock wave and the surrounding water. On the basis of this very conservative assumption, calculations for alpha-mode containment failure were carried out by Theofanous.
However, these optimal conditions used for conservative estimates do not occur in the real world. For one thing, the entire molten reactor core will never be in the premixture, but only a part of it, e.g., as a jet of molten corium impinging on a water pool in the lower plenum of the reactor, fragmenting there by ablation and thereby allowing the formation of a premixture in the vicinity of the melt jet falling through the water pool. Alternatively, the melt may arrive as a thick jet at the bottom of the lower plenum, where it forms a pool of melt overlaid by a pool of water. In this case, a premixing zone can form at the interface between the melt pool and the water pool. In both cases, it is clear that far from the entire molten reactor inventory is involved in premixing, but rather only a small percentage. Further limitations arise from the saturated nature of the water in the reactor, i.e., water with appreciable subcooling is not present there. In the case of penetration of a fragmenting melt jet, this leads to increasing evaporation and an increasing steam content in the premixture, which, at a steam content above about 70% in the water/steam mixture, prevents the explosion altogether or at least limits its strength. Another counter-effect is the solidification of the molten particles, which depends, among other things, on the diameter of the molten particles; that is, small particles solidify faster than larger ones. Furthermore, the models for instability growth at interfaces between flowing media (e.g. Kelvin-Helmholtz, Rayleigh-Taylor, Conte-Miles, ...) show a correlation between particle size after fragmentation and the ratio of the density of the fragmenting medium (water-vapor mixture) to the density of the fragmented medium, which can also be demonstrated experimentally. In the case of corium (density of ~ 8000 kg/m³), much smaller droplets (~ 3 - 4 mm) result than when alumina (Al2O3) is used as a corium simulant, with a density of just under half that of corium and droplet sizes in the range of 1 - 2 cm. Jet fragmentation experiments conducted at JRC ISPRA under typical reactor conditions, with masses of molten corium up to 200 kg and melt jet diameters of 5 - 10 cm in pools of saturated water up to 2 m deep, produced steam explosions only when Al2O3 was used as the corium simulant. Despite various efforts on the part of the experimenters, it was never possible to trigger a steam explosion in the corium experiments in FARO.
If a steam explosion occurs in a confined tank of water due to rapid heating of the water, the pressure wave and rapidly expanding steam can cause severe water hammer. This was the mechanism that, in Idaho, USA, in 1961, caused the SL-1 nuclear reactor vessel to jump upward into the air when it was destroyed by a criticality accident. In the case of SL-1, the fuel and fuel elements vaporized from instantaneous overheating.
In January 1961, operator error caused the SL-1 reactor to instantly destroy itself in a steam explosion. During the 1986 Chernobyl nuclear disaster in the Soviet Union, it was feared that a major steam explosion (and resulting Europe-wide nuclear fallout) would occur if the lava-like molten nuclear fuel melted through the reactor's basement and came into contact with residual fire-fighting water and groundwater. The threat was averted by frantic tunneling underneath the reactor in order to pump out water and reinforce the underlying soil with concrete.
In a nuclear meltdown, the most severe outcome of a steam explosion is early containment building failure. Two possibilities are the ejection at high pressure of molten fuel into the containment, causing rapid heating; or an in-vessel steam explosion causing ejection of a missile (such as the upper head) into, and through, the containment. Less dramatic but still significant is that the molten mass of fuel and reactor core melts through the floor of the reactor building and reaches ground water; a steam explosion might occur, but the debris would probably be contained, and would in fact, being dispersed, probably be more easily cooled. See WASH-1400 for details.
Further examples
Molten aluminium produces a strong exothermic reaction with water, which is observed in some building fires.
In a more domestic setting, steam explosions can be a result of trying to extinguish burning oil with water, in a process called slopover. When oil in a pan is on fire, the natural impulse may be to extinguish it with water; however, doing so will cause the hot oil to superheat the water. The resulting steam will disperse upwards and outwards rapidly and violently in a spray also containing the ignited oil. The correct method to extinguish such fires is to use either a damp cloth or a tight lid on the pan; both methods deprive the fire of oxygen, and the cloth also cools it down. Alternatively, a non-volatile purpose designed fire retardant agent or simply a fire blanket can be used.
Practical uses
Biomass Refinement
Steam explosive biorefinement is an industrial application to valorize biomass. It involves pressurizing biomass with steam at up to 3 MPa (30 atmospheres) and instantaneously releasing the pressure to produce the desired transformation in the biomass. An industrial application of the concept has been shown for a paper fiber project.
Steam turbines
A water vapor explosion creates a large volume of gas without leaving environmentally harmful residues. The controlled explosion of water has been used for generating steam in power stations and in modern types of steam turbines. Newer steam engines use heated oil to force drops of water to explode and create high pressure in a controlled chamber. The pressure is then used to run a turbine or a converted combustion engine. Hot-oil-and-water explosions are becoming particularly popular in concentrated solar generators, because the water can be separated from the oil in a closed loop without any external energy. Water explosion is considered environmentally friendly if the heat is generated from a renewable resource.
Flash boiling in cooking
A cooking technique called flash boiling uses a small amount of water to speed up a cooking process. For example, it can be used to melt a slice of cheese onto a hamburger patty. The cheese slice is placed on top of the meat on a hot surface such as a frying pan, and a small quantity of cold water is thrown onto the surface near the patty. A vessel (such as a pot or frying-pan cover) is then used to quickly trap the flash of steam, directing much of it onto the cheese and patty. The steam gives up a large amount of heat as it condenses back into liquid water, the same latent-heat transfer exploited in refrigerators and freezers.
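To see why such a small quantity of water is effective, the heat handed to the food when the steam condenses can be estimated from the latent heat of vaporization of water (roughly 2.26 MJ/kg at atmospheric pressure). A minimal sketch, with an assumed 5 mL splash of water:

```python
# Rough estimate of the heat delivered by condensing steam (illustrative values only)
LATENT_HEAT_VAPORIZATION = 2.26e6  # J/kg, water at ~100 deg C and 1 atm

water_mass_kg = 0.005  # assumed: about 5 mL (5 g) of water thrown onto the pan
heat_released_j = water_mass_kg * LATENT_HEAT_VAPORIZATION

print(f"Condensing {water_mass_kg * 1000:.0f} g of steam releases ~{heat_released_j / 1000:.1f} kJ")
# ~11.3 kJ, comparable to what a 1 kW burner transfers in about 11 seconds
```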
Other uses
Internal combustion engines may use flash-boiling to aerosolize the fuel.
See also
BLEVE
Boiler explosion
Explosive boiling
Multiphase flow
2007 New York City steam explosion
Chernobyl disaster
Bibliography
Triggered Steam Explosions by Lloyd S. Nelson, Paul W. Brooks, Riccardo Bonazza and Michael L. Corradini ... Kjetil Hildal
References
Explosion protection
Nuclear accidents and incidents
Water in gas
Explosions
Process safety
ja:水蒸気爆発 | Steam explosion | [
"Chemistry",
"Engineering"
] | 2,831 | [
"Explosion protection",
"Nuclear accidents and incidents",
"Safety engineering",
"Combustion engineering",
"Process safety",
"Explosions",
"Chemical process engineering",
"Radioactivity"
] |
1,277,825 | https://en.wikipedia.org/wiki/Angular%20momentum%20coupling | In quantum mechanics, angular momentum coupling is the procedure of constructing eigenstates of total angular momentum out of eigenstates of separate angular momenta. For instance, the orbit and spin of a single particle can interact through spin–orbit interaction, in which case the complete physical picture must include spin–orbit coupling. Or two charged particles, each with a well-defined angular momentum, may interact by Coulomb forces, in which case coupling of the two one-particle angular momenta to a total angular momentum is a useful step in the solution of the two-particle Schrödinger equation.
In both cases the separate angular momenta are no longer constants of motion, but the sum of the two angular momenta usually still is. Angular momentum coupling in atoms is of importance in atomic spectroscopy. Angular momentum coupling of electron spins is of importance in quantum chemistry. Also in the nuclear shell model angular momentum coupling is ubiquitous.
In astronomy, spin–orbit coupling reflects the general law of conservation of angular momentum, which holds for celestial systems as well. In simple cases, the direction of the angular momentum vector is neglected, and the spin–orbit coupling is the ratio between the frequency with which a planet or other celestial body spins about its own axis to that with which it orbits another body. This is more commonly known as orbital resonance. Often, the underlying physical effects are tidal forces.
General theory and detailed origin
Angular momentum conservation
Conservation of angular momentum is the principle that the total angular momentum of a system has a constant magnitude and direction if the system is subjected to no external torque. Angular momentum is a property of a physical system that is a constant of motion (also referred to as a conserved property, time-independent and well-defined) in two situations:
The system experiences a spherically symmetric potential field.
The system moves (in quantum mechanical sense) in isotropic space.
In both cases the angular momentum operator commutes with the Hamiltonian of the system. By Heisenberg's uncertainty relation this means that the angular momentum and the energy (eigenvalue of the Hamiltonian) can be measured at the same time.
An example of the first situation is an atom whose electrons only experience the Coulomb force of its atomic nucleus. If we ignore the electron–electron interaction (and other small interactions such as spin–orbit coupling), the orbital angular momentum of each electron commutes with the total Hamiltonian. In this model the atomic Hamiltonian is a sum of kinetic energies of the electrons and the spherically symmetric electron–nucleus interactions. The individual electron angular momenta commute with this Hamiltonian. That is, they are conserved properties of this approximate model of the atom.
An example of the second situation is a rigid rotor moving in field-free space. A rigid rotor has a well-defined, time-independent, angular momentum.
These two situations originate in classical mechanics. The third kind of conserved angular momentum, associated with spin, does not have a classical counterpart. However, all rules of angular momentum coupling apply to spin as well.
In general the conservation of angular momentum implies full rotational symmetry
(described by the groups SO(3) and SU(2)) and, conversely, spherical symmetry implies conservation of angular momentum. If two or more physical systems have conserved angular momenta, it can be useful to combine these momenta to a total angular momentum of the combined system—a conserved property of the total system.
The building of eigenstates of the total conserved angular momentum from the angular momentum eigenstates of the individual subsystems is referred to as angular momentum coupling.
Application of angular momentum coupling is useful when there is an interaction between subsystems that, without interaction, would have conserved angular momentum. By the very interaction the spherical symmetry of the subsystems is broken, but the angular momentum of the total system remains a constant of motion. Use of the latter fact is helpful in the solution of the Schrödinger equation.
Examples
As an example we consider two electrons, in an atom (say the helium atom) labeled with i = 1 and 2. If there is no electron–electron interaction, but only electron–nucleus interaction, then the two electrons can be rotated around the nucleus independently of each other; nothing happens to their energy. The expectation values of both operators, ℓ1 and ℓ2, are conserved.
However, if we switch on the electron–electron interaction that depends on the distance d(1,2) between the electrons, then only a simultaneous and equal rotation of the two electrons will leave d(1,2) invariant. In such a case the expectation value of neither ℓ1 nor ℓ2 is a constant of motion in general, but the expectation value of the total orbital angular momentum operator L = ℓ1 + ℓ2 is. Given the eigenstates of ℓ1 and ℓ2, the construction of eigenstates of L (which still is conserved) is the coupling of the angular momenta of electrons 1 and 2.
The total orbital angular momentum quantum number L is restricted to integer values and must satisfy the triangular condition |ℓ1 − ℓ2| ≤ L ≤ ℓ1 + ℓ2, such that the three nonnegative integer values could correspond to the three sides of a triangle.
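As a concrete illustration, the allowed total quantum numbers and the expansion coefficients (the Clebsch–Gordan coefficients) can be generated symbolically. The following sketch uses SymPy; the choice ℓ1 = ℓ2 = 1 (two p electrons) is just an assumed example.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

l1, l2 = 1, 1  # assumed example: two p electrons

# Triangular condition: |l1 - l2| <= L <= l1 + l2
allowed_L = list(range(abs(l1 - l2), l1 + l2 + 1))
print("Allowed total L values:", allowed_L)  # [0, 1, 2]

# One Clebsch-Gordan coefficient <l1 m1; l2 m2 | L M>
coeff = CG(S(l1), S(1), S(l2), S(0), S(1), S(1)).doit()
print("<1 1; 1 0 | 1 1> =", coeff)  # sqrt(2)/2
```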
In quantum mechanics, coupling also exists between angular momenta belonging to different Hilbert spaces of a single object, e.g. its spin and its orbital angular momentum. If the spin has half-integer values, such as for an electron, then the total (orbital plus spin) angular momentum will also be restricted to half-integer values.
Reiterating slightly differently the above: one expands the quantum states of composed systems (i.e. made of subunits like two hydrogen atoms or two electrons) in basis sets which are made of tensor products of quantum states which in turn describe the subsystems individually. We assume that the states of the subsystems can be chosen as eigenstates of their angular momentum operators (and of their component along any arbitrary axis).
The subsystems are therefore correctly described by a pair of , quantum numbers (see angular momentum for details). When there is interaction among the subsystems, the total Hamiltonian contains terms that do not commute with the angular operators acting on the subsystems only. However, these terms do commute with the total angular momentum operator. Sometimes one refers to the non-commuting interaction terms in the Hamiltonian as angular momentum coupling terms, because they necessitate the angular momentum coupling.
Spin–orbit coupling
The behavior of atoms and smaller particles is well described by the theory of quantum mechanics, in which each particle has an intrinsic angular momentum called spin and specific configurations (of e.g. electrons in an atom) are described by a set of quantum numbers. Collections of particles also have angular momenta and corresponding quantum numbers, and under different circumstances the angular momenta of the parts couple in different ways to form the angular momentum of the whole. Angular momentum coupling is a category including some of the ways that subatomic particles can interact with each other.
In atomic physics, spin–orbit coupling, also known as spin-pairing, describes a weak magnetic interaction, or coupling, of the particle spin and the orbital motion of this particle, e.g. the electron spin and its motion around an atomic nucleus. One of its effects is to separate the energy of internal states of the atom, e.g. spin-aligned and spin-antialigned that would otherwise be identical in energy. This interaction is responsible for many of the details of atomic structure.
In solid-state physics, the spin coupling with the orbital motion can lead to splitting of energy bands due to Dresselhaus or Rashba effects.
In the macroscopic world of orbital mechanics, the term spin–orbit coupling is sometimes used in the same sense as spin–orbit resonance.
LS coupling
In light atoms (generally Z ≤ 30), electron spins si interact among themselves so they combine to form a total spin angular momentum S. The same happens with orbital angular momenta ℓi, forming a total orbital angular momentum L. The interaction between the quantum numbers L and S is called Russell–Saunders coupling (after Henry Norris Russell and Frederick Saunders) or LS coupling. Then S and L couple together and form a total angular momentum J:
J = L + S,
where L and S are the totals:
L = ∑i ℓi and S = ∑i si.
This is an approximation which is good as long as any external magnetic fields are weak. In larger magnetic fields, these two momenta decouple, giving rise to a different splitting pattern in the energy levels (the Paschen–Back effect), and the size of LS coupling term becomes small.
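Given total quantum numbers L and S, the allowed values of J run from |L − S| to L + S in unit steps. A minimal Python sketch that also handles half-integer spins via fractions; the example inputs are assumed for illustration:

```python
from fractions import Fraction

def allowed_j(L, S):
    """Allowed total angular momentum values J = |L-S|, |L-S|+1, ..., L+S."""
    L, S = Fraction(L), Fraction(S)
    j, values = abs(L - S), []
    while j <= L + S:
        values.append(j)
        j += 1
    return values

print([str(j) for j in allowed_j(2, 1)])               # ['1', '2', '3']  (a 3D term splits into J = 1, 2, 3)
print([str(j) for j in allowed_j(1, Fraction(1, 2))])  # ['1/2', '3/2']   (a 2P term splits into J = 1/2, 3/2)
```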
For an extensive example on how LS-coupling is practically applied, see the article on term symbols.
jj coupling
In heavier atoms the situation is different. In atoms with bigger nuclear charges, spin–orbit interactions are frequently as large as or larger than spin–spin interactions or orbit–orbit interactions. In this situation, each orbital angular momentum ℓi tends to combine with the corresponding individual spin angular momentum si, originating an individual total angular momentum ji. These then couple up to form the total angular momentum J:
J = ∑i ji = ∑i (ℓi + si).
This description, facilitating calculation of this kind of interaction, is known as jj coupling.
Spin–spin coupling
Spin–spin coupling is the coupling of the intrinsic angular momentum (spin) of different particles.
J-coupling between pairs of nuclear spins is an important feature of nuclear magnetic resonance (NMR) spectroscopy as it can
provide detailed information about the structure and conformation of molecules. Spin–spin coupling between nuclear spin and electronic spin is responsible for hyperfine structure in atomic spectra.
Term symbols
Term symbols are used to represent the states and spectral transitions of atoms; they are found from the coupling of angular momenta mentioned above. When the state of an atom has been specified with a term symbol, the allowed transitions can be found through selection rules by considering which transitions would conserve angular momentum. A photon has spin 1, and when there is a transition with emission or absorption of a photon the atom will need to change state to conserve angular momentum. The term symbol selection rules are: ΔS = 0; ΔL = 0, ±1; Δℓ = ±1; ΔJ = 0, ±1.
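A small sketch of how these rules could be checked for a candidate transition between two LS-coupled levels, each given as an (S, L, J) triple. The example levels are assumed; the Δℓ = ±1 rule concerns the jumping electron and is not captured by the term symbol alone, so it is omitted here.

```python
def transition_allowed(initial, final):
    """Check the term-symbol selection rules dS = 0, dL = 0 or ±1, dJ = 0 or ±1.

    Levels are (S, L, J) tuples. The additional standard rule that a
    J = 0 -> J = 0 transition is forbidden is also included.
    """
    s1, l1, j1 = initial
    s2, l2, j2 = final
    if s2 != s1:
        return False
    if abs(l2 - l1) > 1:
        return False
    if abs(j2 - j1) > 1 or (j1 == 0 and j2 == 0):
        return False
    return True

# Assumed example resembling a sodium D line: 2P(J=3/2) -> 2S(J=1/2)
print(transition_allowed((0.5, 1, 1.5), (0.5, 0, 0.5)))  # True
# A spin-changing (intercombination) transition is forbidden in pure LS coupling
print(transition_allowed((1, 1, 2), (0, 0, 0)))          # False
```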
The expression "term symbol" is derived from the "term series" associated with the Rydberg states of an atom and their energy levels. In the Rydberg formula the frequency or wave number of the light emitted by a hydrogen-like atom is proportional to the difference between the two terms of a transition. The series known to early spectroscopy were designated sharp, principal, diffuse, and fundamental and consequently the letters and were used to represent the orbital angular momentum states of an atom.
Relativistic effects
In very heavy atoms, relativistic shifting of the energies of the electron energy levels accentuates spin–orbit coupling effect. Thus, for example, uranium molecular orbital diagrams must directly incorporate relativistic symbols when considering interactions with other atoms.
Nuclear coupling
In atomic nuclei, the spin–orbit interaction is much stronger than for atomic electrons, and is incorporated directly into the nuclear shell model. In addition, unlike atomic–electron term symbols, the lowest energy state is not ℓ − s, but rather, ℓ + s. All nuclear levels whose ℓ value (orbital angular momentum) is greater than zero are thus split in the shell model to create states designated by ℓ + s and ℓ − s. Due to the nature of the shell model, which assumes an average potential rather than a central Coulombic potential, the nucleons that go into the ℓ + s and ℓ − s nuclear states are considered degenerate within each orbital (e.g. the 2p3/2 contains four nucleons, all of the same energy; higher in energy is the 2p1/2, which contains two equal-energy nucleons).
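The occupancies quoted above follow from the 2j + 1 degeneracy of a level with total angular momentum j = ℓ ± 1/2. A minimal sketch of that bookkeeping:

```python
from fractions import Fraction

def shell_capacities(l):
    """Nucleon capacities (2j + 1) of the j = l + 1/2 and j = l - 1/2 levels."""
    capacities = {}
    for j in (Fraction(2 * l + 1, 2), Fraction(2 * l - 1, 2)):
        if j > 0:
            capacities[str(j)] = int(2 * j + 1)
    return capacities

# l = 1 (a p level): p3/2 holds four nucleons, p1/2 holds two
print(shell_capacities(1))  # {'3/2': 4, '1/2': 2}
```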
See also
Clebsch–Gordan coefficients
Angular momentum diagrams (quantum mechanics)
Spherical basis
Notes
External links
LS and jj coupling
Term symbol
Web calculator of spin couplings: shell model, atomic term symbol
Angular momentum
Atomic physics
Rotational symmetry
ar:ترابط مغزلي مداري
it:Interazione spin-orbita | Angular momentum coupling | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,430 | [
"Symmetry",
"Physical quantities",
"Quantity",
"Quantum mechanics",
"Rotational symmetry",
" molecular",
"Atomic physics",
"Atomic",
"Angular momentum",
"Moment (physics)",
"Momentum",
" and optical physics"
] |
1,277,893 | https://en.wikipedia.org/wiki/Thermogravimetric%20analysis | Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions, absorption, adsorption and desorption; as well as chemical phenomena including chemisorptions, thermal decomposition, and solid-gas reactions (e.g., oxidation or reduction).
Thermogravimetric analyzer
Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis while many additional measures may be derived from these three base measurements.
A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at a constant rate (or, for some applications, the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres including: ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures including: a high vacuum, high pressure, constant pressure, or a controlled pressure.
The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y axis versus either temperature or time on the x-axis. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis.
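In practice the DTG curve is computed numerically from the sampled mass-versus-temperature data. The sketch below uses NumPy; the single-step synthetic mass-loss curve is assumed purely for illustration.

```python
import numpy as np

# Synthetic TGA data: one mass-loss step centred near 400 deg C (illustrative only)
temperature = np.linspace(25, 800, 500)                        # deg C
mass_pct = 100 - 40 / (1 + np.exp(-(temperature - 400) / 15))  # % of initial mass

# DTG curve: first derivative of mass with respect to temperature
dtg = np.gradient(mass_pct, temperature)                       # % per deg C

# The steepest mass loss (DTG minimum) marks the inflection point of the step
t_peak = temperature[np.argmin(dtg)]
print(f"Maximum rate of mass loss near {t_peak:.0f} deg C")    # about 400 deg C
```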
A TGA can be used for materials characterization through analysis of characteristic decomposition patterns. It is an especially useful technique for the study of polymeric materials, including thermoplastics, thermosets, elastomers, composites, plastic films, fibers, coatings, paints, and fuels.
Types of TGA
There are three types of thermogravimetry:
Isothermal or static thermogravimetry: In this technique, the sample weight is recorded as a function of time at a constant temperature.
Quasistatic thermogravimetry: In this technique, the sample temperature is raised in sequential steps separated by isothermal intervals, during which the sample mass reaches stability before the start of the next temperature ramp.
Dynamic thermogravimetry: In this technique, the sample is heated in an environment whose temperature is changed in a linear manner.
Applications
Thermal stability
TGA can be used to evaluate the thermal stability of a material. In a desired temperature range, if a species is thermally stable, there will be no observed mass change. Negligible mass loss corresponds to little or no slope in the TGA trace. TGA also gives the upper use temperature of a material. Beyond this temperature the material will begin to degrade.
TGA is used in the analysis of polymers. Polymers usually melt before they decompose, thus TGA is mainly used to investigate the thermal stability of polymers. Most polymers melt or degrade before 200 °C. However, there is a class of thermally stable polymers that are able to withstand temperatures of at least 300 °C in air and 500 °C in inert gases without structural changes or strength loss, which can be analyzed by TGA.
Oxidation and combustion
The simplest materials characterization is the residue remaining after a reaction. For example, a combustion reaction could be tested by loading a sample into a thermogravimetric analyzer under normal conditions. The thermogravimetric analyzer would ignite combustion in the sample by heating it beyond its ignition temperature. The resulting TGA curve, plotted with the y-axis as a percentage of initial mass, would show the residue at the final point of the curve.
Oxidative mass losses are the most common observable losses in TGA.
Studying the resistance to oxidation in copper alloys is very important. For example, NASA (National Aeronautics and Space Administration) is conducting research on advanced copper alloys for their possible use in combustion engines. However, oxidative degradation can occur in these alloys as copper oxides form in atmospheres that are rich in oxygen. Resistance to oxidation is significant because NASA wants to be able to reuse shuttle materials. TGA can be used to study the static oxidation of materials such as these for practical use.
Combustion during TG analysis is identifiable by the distinct traces it leaves in the TGA thermograms produced. One interesting example occurs with samples of as-produced, unpurified carbon nanotubes that contain a large amount of metal catalyst. Due to combustion, a TGA trace can deviate from the normal form of a well-behaved function. This phenomenon arises from a rapid temperature change. When the weight and temperature are plotted versus time, a dramatic slope change in the first-derivative plot is concurrent with the mass loss of the sample and the sudden increase in temperature seen by the thermocouple. Beyond the oxidation of carbon, the mass loss could also result from particles of smoke released by the burning, caused by inconsistencies in the material itself, leading to poorly controlled weight loss.
Different weight losses on the same sample at different points can also be used as a diagnosis of the sample's anisotropy. For instance, sampling the top side and the bottom side of a sample with dispersed particles inside can be useful to detect sedimentation, as thermograms will not overlap but will show a gap between them if the particle distribution is different from side to side.
Thermogravimetric kinetics
Thermogravimetric kinetics may be explored for insight into the reaction mechanisms of thermal (catalytic or non-catalytic) decomposition involved in the pyrolysis and combustion processes of different materials.
Activation energies of the decomposition process can be calculated using the Kissinger method.
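In the Kissinger method, runs at several heating rates β give peak temperatures Tp (taken from the DTG maximum), and ln(β/Tp²) plotted against 1/Tp is a straight line of slope −Ea/R. A minimal sketch of the fit; the heating rates and peak temperatures below are assumed example data, not measurements.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Assumed example data: heating rates (K/min) and corresponding DTG peak temperatures (K)
beta = np.array([5.0, 10.0, 20.0, 40.0])
t_peak = np.array([590.0, 605.0, 621.0, 638.0])

# Kissinger plot: ln(beta / Tp^2) versus 1/Tp has slope -Ea/R
y = np.log(beta / t_peak**2)
x = 1.0 / t_peak
slope, intercept = np.polyfit(x, y, 1)

activation_energy = -slope * R  # J/mol
print(f"Ea ≈ {activation_energy / 1000:.0f} kJ/mol")  # roughly 125 kJ/mol for these assumed data
```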
Though a constant heating rate is more common, a constant mass loss rate can illuminate specific reaction kinetics. For example, the kinetic parameters of the carbonization of polyvinyl butyral were found using a constant mass loss rate of 0.2 wt %/min.
Operation in combination with other instruments
Thermogravimetric analysis is often combined with other processes or used in conjunction with other analytical methods.
For example, the TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C for coupling with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry gas analysis. As the temperature increases, various components of the sample are decomposed and the weight percentage of each resulting mass change can be measured.
References
Thermodynamics
Materials science
Analytical chemistry | Thermogravimetric analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,441 | [
"Applied and interdisciplinary physics",
"Materials science",
"Thermodynamics",
"nan",
"Dynamical systems"
] |