| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
8,378 | https://en.wikipedia.org/wiki/Dipole | In physics, a dipole is an electromagnetic phenomenon which occurs in two ways:
An electric dipole deals with the separation of the positive and negative electric charges found in any electromagnetic system. A simple example of this system is a pair of charges of equal magnitude but opposite sign separated by some typically small distance. (A permanent electric dipole is called an electret.)
A magnetic dipole is the closed circulation of an electric current system. A simple example is a single loop of wire with constant current through it. A bar magnet is an example of a magnet with a permanent magnetic dipole moment.
Dipoles, whether electric or magnetic, can be characterized by their dipole moment, a vector quantity. For the simple electric dipole, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the "dipole limit", where, for example, the distance of the generating charges should converge to 0 while simultaneously, the charge strength should diverge to infinity in such a way that the product remains a positive constant.)
For the magnetic (dipole) current loop, the magnetic dipole moment points through the loop (according to the right hand grip rule), with a magnitude equal to the current in the loop times the area of the loop.
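As a concrete numerical illustration of the two definitions above, the following minimal Python sketch evaluates an electric dipole moment p = q·d and a magnetic dipole moment m = I·A·n̂; the charge, separation, current, and loop-area values are arbitrary assumptions chosen only for the example.

```python
import numpy as np

# Electric dipole: charges +q and -q separated by vector d (pointing from - to +).
q = 1.6e-19                         # assumed charge magnitude, coulombs
d = np.array([0.0, 0.0, 1.0e-10])   # assumed separation vector, metres
p = q * d                           # electric dipole moment, C*m

# Magnetic dipole: a planar loop carrying current I over area A with unit normal
# n_hat given by the right-hand grip rule.
I = 2.0                             # assumed current, amperes
A = 1.0e-4                          # assumed loop area, square metres
n_hat = np.array([0.0, 0.0, 1.0])
m = I * A * n_hat                   # magnetic dipole moment, A*m^2

print(p)   # ~1.6e-29 C*m along z
print(m)   # 2e-4 A*m^2 along z
```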
Similar to magnetic current loops, the electron particle and some other fundamental particles have magnetic dipole moments, as an electron generates a magnetic field identical to that generated by a very small current loop. However, an electron's magnetic dipole moment is not due to a current loop, but to an intrinsic property of the electron. The electron may also have an electric dipole moment though such has yet to be observed (see electron electric dipole moment).
A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles, see Classification below) and may be labeled "north" and "south". In terms of the Earth's magnetic field, they are respectively "north-seeking" and "south-seeking" poles: if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. In a magnetic compass, the north pole of a bar magnet points north. However, that means that Earth's geomagnetic north pole is the south pole (south-seeking pole) of its dipole moment and vice versa.
The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated.
Classification
A physical dipole consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field.
Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of exactly the same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop.
Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge ("monopole moment") is 0—as it always is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: its field falls off in proportion to 1/r³, as compared to 1/r⁴ for the next (quadrupole) term, higher powers of 1/r for higher terms, and 1/r² for the monopole term.
Molecular dipoles
Many molecules have such dipole moments due to non-uniform distributions of positive and negative charges on the various atoms. Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. Therefore, a molecule's dipole is an electric dipole with an inherent electric field that should not be confused with a magnetic dipole, which generates a magnetic field.
The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in the non-SI unit named debye in his honor.
For molecules there are three types of dipoles:
Permanent dipoles These occur when two atoms in a molecule have substantially different electronegativity : One atom attracts electrons more than another, becoming more negative, while the other atom becomes more positive. A molecule with a permanent dipole moment is called a polar molecule. See dipole–dipole attractions.
Instantaneous dipoles These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. These dipoles are smaller in magnitude than permanent dipoles, but still play a large role in chemistry and biochemistry due to their prevalence. See instantaneous dipole.
Induced dipoles These can occur when one molecule with a permanent dipole repels another molecule's electrons, inducing a dipole moment in that molecule. A molecule is polarized when it carries an induced dipole. See induced-dipole attraction.
More generally, an induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole moment is equal to the product of the strength of the external field and the dipole polarizability of ρ.
Dipole moment values can be obtained from measurement of the dielectric constant. Some typical gas phase values given with the unit debye are:
carbon dioxide: 0
carbon monoxide: 0.112 D
ozone: 0.53 D
phosgene: 1.17 D
ammonia: 1.42 D
water vapor: 1.85 D
hydrogen cyanide: 2.98 D
cyanamide: 4.27 D
potassium bromide: 10.41 D
Potassium bromide (KBr) has one of the highest dipole moments because it is an ionic compound that exists as a molecule in the gas phase.
The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that from the dipole moment information can be deduced about the molecular geometry.
For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel so that the molecule must be linear. For H2O the O−H bond moments do not cancel because the molecule is bent. For ozone (O3) which is also a bent molecule, the bond dipole moments are not zero even though the O−O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone which show a positive charge on the central oxygen atom.
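A minimal sketch of this vector-sum picture, assuming an illustrative O−H bond moment of about 1.5 D and the familiar 104.5° bond angle of water (both numbers are assumptions used only for the demonstration); the resulting molecular moment comes out close to the measured 1.85 D, while antiparallel bond moments, as in CO2, cancel.

```python
import numpy as np

bond_moment = 1.5           # assumed O-H bond dipole moment, debye
angle = np.radians(104.5)   # H-O-H bond angle of water

# Two O-H bond-moment vectors placed symmetrically about the bisector (x axis).
b1 = bond_moment * np.array([np.cos(angle / 2),  np.sin(angle / 2)])
b2 = bond_moment * np.array([np.cos(angle / 2), -np.sin(angle / 2)])
print(round(np.linalg.norm(b1 + b2), 2))   # ~1.84 D, close to the 1.85 D listed above

# In a linear molecule such as CO2 the two bond moments are antiparallel and cancel.
print(np.allclose(np.array([1.0, 0.0]) + np.array([-1.0, 0.0]), 0.0))   # True
```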
An example in organic chemistry of the role of geometry in determining dipole moment is the cis and trans isomers of 1,2-dichloroethene. In the cis isomer the two polar C−Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D. In the trans isomer, the dipole moment is zero because the two C−Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C−H bonds also cancel).
Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions centered on and in the same plane as the boron cation, the symmetry of the molecule results in its dipole moment being zero.
Quantum-mechanical dipole operator
Consider a collection of N particles with charges qi and position vectors ri. For instance, this collection may be a molecule consisting of electrons, all with charge −e, and nuclei with charge eZi, where Zi is the atomic number of the i th nucleus.
The dipole observable (physical quantity) has the quantum mechanical dipole operator p = Σi qi ri, with the sum running over the N particles.
Notice that this definition is valid only for neutral atoms or molecules, i.e. total charge equal to zero. In the ionized case, we have p = Σi qi (ri − rc),
where rc is the center of mass of the molecule/group of particles.
Atomic dipoles
A non-degenerate (S-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus, I p I⁻¹ = −p,
where p is the dipole operator and I is the inversion operator.
The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator, ⟨p⟩ = ⟨S | p | S⟩,
where | S ⟩ is an S-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion: I | S ⟩ = ±| S ⟩. Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse, ⟨S | p | S⟩ = ⟨I S | p | I S⟩ = ⟨S | I⁻¹ p I | S⟩ = −⟨S | p | S⟩,
it follows that the expectation value changes sign under inversion. We used here the fact that I, being a symmetry operator, is unitary: I⁻¹ = I†, and by definition the Hermitian adjoint I† may be moved from bra to ket and then becomes I†† = I. Since the only quantity that is equal to minus itself is the zero, the expectation value vanishes, ⟨p⟩ = 0.
In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) only if some of the wavefunctions belonging to the degenerate energies have opposite parity; i.e., have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see article Laplace–Runge–Lenz vector for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd).
Field of a static magnetic dipole
Magnitude
The far-field strength, B, of a dipole magnetic field is given by B(m, r, λ) = (μ0 m / 4πr³) √(1 + 3 sin²λ),
where
B is the strength of the field, measured in teslas
r is the distance from the center, measured in metres
λ is the magnetic latitude (equal to 90° − θ) where θ is the magnetic colatitude, measured in radians or degrees from the dipole axis
m is the dipole moment, measured in ampere-square metres or joules per tesla
μ0 is the permeability of free space, measured in henries per metre.
Conversion to cylindrical coordinates is achieved using z = r sin λ and ρ = r cos λ,
where ρ is the perpendicular distance from the z-axis. Then, B(ρ, z) = (μ0 m / 4π(ρ² + z²)^(3/2)) √(1 + 3z²/(ρ² + z²)).
Vector form
The field itself is a vector quantity: B(m, r) = (μ0 / 4π) [3r̂(r̂ · m) − m] / r³,
where
B is the field
r is the vector from the position of the dipole to the position where the field is being measured
r is the absolute value of r: the distance from the dipole
r̂ = r/r is the unit vector parallel to r;
m is the (vector) dipole moment
μ0 is the permeability of free space
This is exactly the field of a point dipole, exactly the dipole term in the multipole expansion of an arbitrary field, and approximately the field of any dipole-like configuration at large distances.
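The vector expression above can be evaluated directly. The sketch below is a minimal numerical check, using an assumed Earth-like dipole moment of 8 × 10²² A·m², and verifies that the magnitude it gives on the magnetic equator matches the scalar far-field formula of the previous subsection.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space, H/m

def dipole_B(m, r):
    """Field of a magnetic point dipole with moment m (A*m^2) at position r (m)."""
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(m, r_hat) - m) / r_norm**3

m = np.array([0.0, 0.0, 8.0e22])            # assumed Earth-like dipole moment
r_equator = np.array([6.371e6, 0.0, 0.0])   # point on the magnetic equator

B = dipole_B(m, r_equator)
print(np.linalg.norm(B))   # ~3.1e-5 T, i.e. roughly 31 microtesla

# Check against the scalar formula B = mu0*m/(4*pi*r^3)*sqrt(1 + 3*sin(lambda)^2)
# with magnetic latitude lambda = 0 on the equator.
scalar = MU0 * 8.0e22 / (4 * np.pi * 6.371e6**3)
print(np.isclose(np.linalg.norm(B), scalar))   # True
```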
Magnetic vector potential
The vector potential A of a magnetic dipole is A(r) = (μ0 / 4π) (m × r̂) / r²,
with the same definitions as above.
Field from an electric dipole
The electrostatic potential at position r due to an electric dipole at the origin is given by: Φ(r) = (1 / 4πε0) (p · r̂) / r²,
where p is the (vector) dipole moment, and ε0 is the permittivity of free space.
This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential: E = −∇Φ = (1 / 4πε0) [3(p · r̂)r̂ − p] / r³ − (1 / 3ε0) p δ³(r), where δ³(r) is the Dirac delta function.
This is of the same form as the expression for the magnetic field of a point magnetic dipole, ignoring the delta function.
In a real electric dipole, however, the charges are physically separate and the electric field diverges or converges at the point charges.
This is different to the magnetic field of a real magnetic dipole which is continuous everywhere. The delta function represents the strong field pointing in the opposite direction between the point charges, which is often omitted since one is rarely interested in the field at the dipole's position.
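A minimal numerical sketch of the dipole potential and the field away from the origin (the delta-function term is omitted, as discussed above); the dipole moment and observation point are assumed illustrative values.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # permittivity of free space, F/m

def dipole_potential(p, r):
    """Electrostatic potential of a point dipole p (C*m) at position r (m), r != 0."""
    r_norm = np.linalg.norm(r)
    return np.dot(p, r) / (4 * np.pi * EPS0 * r_norm**3)

def dipole_E(p, r):
    """Electric field of a point dipole, omitting the delta-function term at r = 0."""
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    return (3 * r_hat * np.dot(p, r_hat) - p) / (4 * np.pi * EPS0 * r_norm**3)

p = np.array([0.0, 0.0, 1.0e-29])   # assumed dipole moment, roughly molecular scale
r = np.array([0.0, 0.0, 1.0e-9])    # observation point 1 nm away on the dipole axis

print(dipole_potential(p, r))   # potential in volts
print(dipole_E(p, r))           # field in V/m; on the axis it points along p
```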
For further discussion of the internal field of dipoles, see the references.
Torque on a dipole
Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge.
When placed in a homogeneous electric or magnetic field, equal but opposite forces arise on each side of the dipole, creating a torque τ:
τ = p × E for an electric dipole moment p (in coulomb-meters), or
τ = m × B for a magnetic dipole moment m (in ampere-square meters).
The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole yields a potential energy of
U = −p · E.
The energy of a magnetic dipole is similarly
U = −m · B.
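A small sketch evaluating the torque and potential energy for an electric dipole in a uniform field; the field strength and orientations are assumed illustrative values, and the magnetic case is identical with p replaced by m and E by B.

```python
import numpy as np

p = np.array([1.0e-29, 0.0, 0.0])   # assumed electric dipole moment, C*m
E = np.array([0.0, 1.0e5, 0.0])     # assumed uniform electric field, V/m

torque = np.cross(p, E)   # tau = p x E, newton-metres
energy = -np.dot(p, E)    # U = -p . E, joules

print(torque)   # directed along +z, tending to rotate p toward E
print(energy)   # zero here, since p is perpendicular to E

# Aligned with the field the energy is lowest; anti-aligned it is highest.
print(-np.dot(np.array([1.0e-29, 0.0, 0.0]), np.array([1.0e5, 0.0, 0.0])))    # -1e-24 J
print(-np.dot(np.array([-1.0e-29, 0.0, 0.0]), np.array([1.0e5, 0.0, 0.0])))   # +1e-24 J
```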
Dipole radiation
In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension, or a more physical next-step, to spherical wave radiation.
In particular, consider a harmonically oscillating electric dipole, with angular frequency ω and a dipole moment p0 along the ẑ direction of the form p(t) = p0 ẑ e^(−iωt).
In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation; it contains near-field terms falling off as 1/r³ and 1/r² together with a radiating term falling off as 1/r.
For ωr/c ≫ 1, the far field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross product r̂ × p, so its amplitude varies as sin θ, where θ is the angle between the dipole axis and the direction of observation.
The time-averaged Poynting vector ⟨S⟩ = (μ0 p0² ω⁴ / 32π²c)(sin²θ / r²) r̂
is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. In fact, the spherical harmonic function (sin θ) responsible for such toroidal angular distribution is precisely the l = 1 "p" wave.
The total time-average power radiated by the field can then be derived from the Poynting vector as P = μ0 ω⁴ p0² / (12πc).
Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, and underlies the reason the sky appears mainly blue.
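A minimal sketch, assuming an illustrative dipole amplitude, that evaluates the time-averaged radiated power P = μ0 ω⁴ p0² / (12πc) and the ω⁴ ratio between blue and red light that underlies the blue colour of the sky.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space, H/m
C = 2.998e8          # speed of light, m/s

def radiated_power(p0, omega):
    """Time-averaged power radiated by a harmonically oscillating electric dipole."""
    return MU0 * omega**4 * p0**2 / (12 * np.pi * C)

p0 = 1.0e-29                           # assumed dipole amplitude, C*m
omega_blue = 2 * np.pi * C / 450e-9    # angular frequency of ~450 nm light
omega_red = 2 * np.pi * C / 700e-9     # angular frequency of ~700 nm light

print(radiated_power(p0, omega_blue))                                  # power in watts
print(radiated_power(p0, omega_blue) / radiated_power(p0, omega_red))  # ~5.9
# The omega^4 dependence means blue light is scattered several times more
# strongly than red, which is the Rayleigh-scattering origin of the blue sky.
```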
A circularly polarized dipole is described as a superposition of two linear dipoles.
See also
Polarization density
Magnetic dipole models
Dipole model of the Earth's magnetic field
Electret
Indian Ocean Dipole and Subtropical Indian Ocean Dipole, two oceanographic phenomena
Magnetic dipole–dipole interaction
Spin magnetic moment
Monopole
Solid harmonics
Axial multipole moments
Cylindrical multipole moments
Spherical multipole moments
Laplace expansion
Molecular solid
Magnetic moment#Internal magnetic field of a dipole
Notes
References
External links
USGS Geomagnetism Program
Fields of Force : a chapter from an online textbook
Electric Dipole Potential by Stephen Wolfram and Energy Density of a Magnetic Dipole by Franz Krafft. Wolfram Demonstrations Project.
Electromagnetism
Potential theory | Dipole | [
"Physics",
"Mathematics"
] | 3,506 | [
"Electromagnetism",
"Physical phenomena",
"Functions and mappings",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Fundamental interactions"
] |
8,398 | https://en.wikipedia.org/wiki/Dimension | In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it; for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it; for example, both a latitude and longitude are required to locate a point on the surface of a sphere. A two-dimensional Euclidean space is a two-dimensional space on the plane. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces.
In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space.
The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space.
In mathematics
In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space.
The dimension of Euclidean n-space Eⁿ is n. When trying to generalize to other types of spaces, one is faced with the question "what makes Eⁿ n-dimensional?" One answer is that to cover a fixed ball in Eⁿ by small balls of radius ε, one needs on the order of ε⁻ⁿ such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in Eⁿ looks locally like Eⁿ⁻¹ and this leads to the notion of the inductive dimension. While these notions agree on Eⁿ, they turn out to be different when one looks at more general spaces.
A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4" or: 4D.
Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry.
The rest of this section examines some of the more important mathematical definitions of dimension.
Vector spaces
The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension.
For the non-free case, this generalizes to the notion of the length of a module.
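As a concrete illustration (an addition to the text, not from the source), the dimension of the subspace spanned by a set of vectors equals the size of any basis, which numerically is the rank of the matrix whose rows are those vectors:

```python
import numpy as np

# Three vectors in R^3; the third is a linear combination of the first two,
# so together they span only a 2-dimensional subspace.
vectors = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [2.0, 3.0, 0.0],
])

print(np.linalg.matrix_rank(vectors))   # 2, the dimension of the spanned subspace
```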
Manifolds
The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean n-space, in which the number n is the manifold's dimension.
For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point.
In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases n > 4 are simplified by having extra space in which to "work", and the cases n = 3 and n = 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied.
Complex dimension
The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension.
Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension.
Varieties
The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety.
An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains of sub-varieties of the given algebraic set (the length of such a chain is the number of strict inclusions "⊊").
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n.
Krull dimension
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety.
For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0.
Topological spaces
For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer n for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim X = n. For a manifold, this coincides with the dimension mentioned above. If no such integer exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1, if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open".
An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1.
Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
Hausdorff dimension
The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values.
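A rough numerical sketch of the box-counting (Minkowski) idea: cover the set with grid boxes of side ε and fit the slope of log N(ε) against log(1/ε). Here the set is a dense sample of points on a circle, so the estimate should come out close to 1; the sample size and box sizes are arbitrary assumptions.

```python
import numpy as np

# Points sampled densely from a circle of radius 1: a 1-dimensional set in the plane.
t = np.linspace(0.0, 2.0 * np.pi, 200_000)
points = np.column_stack([np.cos(t), np.sin(t)])

def box_count(points, eps):
    """Number of eps-sized grid boxes that contain at least one point of the set."""
    return len(np.unique(np.floor(points / eps), axis=0))

eps_values = np.array([0.1, 0.05, 0.025, 0.0125])
counts = np.array([box_count(points, e) for e in eps_values])

# The slope of log N(eps) against log(1/eps) estimates the box-counting dimension.
slope = np.polyfit(np.log(1.0 / eps_values), np.log(counts), 1)[0]
print(round(slope, 2))   # close to 1.0
```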
Hilbert spaces
Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide.
In physics
Spatial dimensions
Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies; i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.)
Time
A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction.
The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy).
The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. Time is different from other spatial dimensions as time operates in all spatial dimensions. Time operates in the first, second and third as well as theoretical spatial dimensions such as a fourth spatial dimension. Time is not however present in a single point of absolute infinite singularity as defined as a geometric point, as an infinitely small point can have no change and therefore no time. Just as when an object moves through positions in space, it also moves through positions in time. In this sense the force moving any object to change is time.
Additional dimensions
In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" at such tiny scales as to be effectively invisible to current experiments.
In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza-Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building.
In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume.
Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be since three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration.
Extra dimensions are said to be universal if all fields are equally free to propagate within them.
In computer graphics and spatial data
Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, Computer-aided design, and Geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions:
Point (0-dimensional), a single coordinate in a Cartesian coordinate system.
Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments.
Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior.
Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior.
Frequently in these systems, especially GIS and Cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines).
More dimensions
List of topics by dimension
See also
References
Further reading
External links
Physical quantities
Abstract algebra
Geometric measurement
Mathematical concepts | Dimension | [
"Physics",
"Mathematics"
] | 3,921 | [
"Geometric measurement",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Physical properties",
"Geometry",
"Theory of relativity",
"nan",
"Abstract algebra",
"Dimension",
"Algebra"
] |
8,400 | https://en.wikipedia.org/wiki/Duodecimal | The duodecimal system, also known as base twelve or dozenal, is a positional numeral system using twelve as its base. In duodecimal, the number twelve is denoted "10", meaning 1 twelve and 0 units; in the decimal system, this number is instead written as "12" meaning 1 ten and 2 units, and the string "10" means ten. In duodecimal, "100" means twelve squared, "1000" means twelve cubed, and "0.1" means a twelfth.
Various symbols have been used to stand for ten and eleven in duodecimal notation; this page uses A and B, as in hexadecimal, which make a duodecimal count from zero to twelve read 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, 10. The Dozenal Societies of America and Great Britain (organisations promoting the use of duodecimal) use turned digits in their published material: ↊ (a turned 2) for ten and ↋ (a turned 3) for eleven.
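As a small illustration of positional notation in base twelve (using the A/B digit convention adopted above), the sketch below converts non-negative integers to duodecimal, so that twelve prints as "10" and a gross as "100".

```python
DIGITS = "0123456789AB"   # A stands for ten, B for eleven

def to_duodecimal(n):
    """Duodecimal (base-twelve) representation of a non-negative integer."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, rem = divmod(n, 12)
        out.append(DIGITS[rem])
    return "".join(reversed(out))

print(to_duodecimal(12))     # "10"
print(to_duodecimal(144))    # "100"
print(to_duodecimal(1728))   # "1000"
print(to_duodecimal(23))     # "1B"
```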
The number twelve, a superior highly composite number, is the smallest number with four non-trivial factors (2, 3, 4, 6), the smallest to include as factors all four numbers (1 to 4) within the subitizing range, and the smallest abundant number. All multiples of reciprocals of 3-smooth numbers (a/(2ᵇ·3ᶜ) where a, b, and c are integers) have a terminating representation in duodecimal. In particular, 1/4 (0.3), 1/3 (0.4), 1/2 (0.6), 2/3 (0.8), and 3/4 (0.9) all have a short terminating representation in duodecimal. There is also higher regularity observable in the duodecimal multiplication table. As a result, duodecimal has been described as the optimal number system.
In these respects, duodecimal is considered superior to decimal, which has only 2 and 5 as factors, and other proposed bases like octal or hexadecimal. Sexagesimal (base sixty) does even better in this respect (the reciprocals of all 5-smooth numbers terminate), but at the cost of unwieldy multiplication tables and a much larger number of symbols to memorize.
Origin
In this section, numerals are in decimal. For example, "10" means 9+1, and "12" means 9+3.
Georges Ifrah speculatively traced the origin of the duodecimal system to a system of finger counting based on the knuckle bones of the four larger fingers. Using the thumb as a pointer, it is possible to count to 12 by touching each finger bone, starting with the farthest bone on the fifth finger, and counting on. In this system, one hand counts repeatedly to 12, while the other displays the number of iterations, until five dozens, i.e. the 60, are full. This system is still in use in many regions of Asia.
Languages using duodecimal number systems are uncommon. Languages in the Nigerian Middle Belt such as Janji, Gbiri-Niragu (Gure-Kahugu), Piti, and the Nimbia dialect of Gwandara; and the Chepang language of Nepal are known to use duodecimal numerals.
Germanic languages have special words for 11 and 12, such as eleven and twelve in English. They come from Proto-Germanic *ainlif and *twalif (meaning, respectively, one left and two left), suggesting a decimal rather than duodecimal origin. However, Old Norse used a hybrid decimal–duodecimal counting system, with its words for "one hundred and eighty" meaning 200 and "two hundred" meaning 240. In the British Isles, this style of counting survived well into the Middle Ages as the long hundred.
Historically, units of time in many civilizations are duodecimal. There are twelve signs of the zodiac, twelve months in a year, and the Babylonians had twelve hours in a day (although at some point, this was changed to 24). Traditional Chinese calendars, clocks, and compasses are based on the twelve Earthly Branches or 24 (12×2) Solar terms. There are 12 inches in an imperial foot, 12 troy ounces in a troy pound, 12 old British pence in a shilling, 24 (12×2) hours in a day; many other items are counted by the dozen, gross (144, square of 12), or great gross (1728, cube of 12). The Romans used a fraction system based on 12, including the uncia, which became both the English words ounce and inch. Pre-decimalisation, Ireland and the United Kingdom used a mixed duodecimal-vigesimal currency system (12 pence = 1 shilling, 20 shillings or 240 pence to the pound sterling or Irish pound), and Charlemagne established a monetary system that also had a mixed base of twelve and twenty, the remnants of which persist in many places.
Notations and pronunciations
In a positional numeral system of base n (twelve for duodecimal), each of the first n natural numbers is given a distinct numeral symbol, and then n is denoted "10", meaning 1 times n plus 0 units. For duodecimal, the standard numeral symbols for 0–9 are typically preserved for zero through nine, but there are numerous proposals for how to write the numerals representing "ten" and "eleven". More radical proposals do not use any Arabic numerals under the principle of "separate identity."
Pronunciation of duodecimal numbers also has no standard, but various systems have been proposed.
Transdecimal symbols
Several authors have proposed using letters of the alphabet for the transdecimal symbols. Latin letters such as A and B (as in hexadecimal) or T and E (initials of Ten and Eleven) are convenient because they are widely accessible, and for instance can be typed on typewriters. However, when mixed with ordinary prose, they might be confused for letters. As an alternative, Greek letters such as τ and ε could be used instead. Frank Emerson Andrews, an early American advocate for duodecimal, suggested and used in his 1935 book New Numbers an italic capital X (from the Roman numeral for ten) and a rounded italic capital E (similar to open E), along with italic numerals 0–9.
Edna Kramer in her 1951 book The Main Stream of Mathematics used a sextile ⚹ (six-pointed asterisk) for ten and a hash # (octothorpe) for eleven. The symbols were chosen because they were available on some typewriters; they are also on push-button telephones. This notation was used in publications of the Dozenal Society of America (DSA) from 1974 to 2008.
From 2008 to 2015, the DSA used the symbols devised by William Addison Dwiggins.
The Dozenal Society of Great Britain (DSGB) proposed the symbols ↊ and ↋. This notation, derived from Arabic digits by 180° rotation, was introduced by Isaac Pitman in 1857. In March 2013, a proposal was submitted to include the digit forms for ten and eleven propagated by the Dozenal Societies in the Unicode Standard. Of these, the British/Pitman forms were accepted for encoding as characters at code points U+218A (TURNED DIGIT TWO) and U+218B (TURNED DIGIT THREE). They were included in Unicode 8.0 (2015).
After the Pitman digits were added to Unicode, the DSA took a vote and then began publishing PDF content using the Pitman digits instead, but continues to use the letters X and E on its webpage.
Base notation
There are also varying proposals of how to distinguish a duodecimal number from a decimal one. The most common method used in mainstream mathematics sources comparing various number bases uses a subscript "10" or "12", e.g. "54₁₂ = 64₁₀". To avoid ambiguity about the meaning of the subscript 10, the subscripts might be spelled out, "54twelve = 64ten". In 2015 the Dozenal Society of America adopted the more compact single-letter abbreviation "z" for "dozenal" and "d" for "decimal", "54z = 64d".
Other proposed methods include italicizing duodecimal numbers "54 = 64", adding a "Humphrey point" (a semicolon instead of a decimal point) to duodecimal numbers "54;6 = 64.5", prefixing duodecimal numbers by an asterisk "*54 = 64", or some combination of these. The Dozenal Society of Great Britain uses an asterisk prefix for duodecimal whole numbers, and a Humphrey point for other duodecimal numbers.
Pronunciation
The Dozenal Society of America suggested the pronunciation of ten and eleven as "dek" and "el". For the names of powers of twelve, there are two prominent systems. In spite of the efficiency of these newer systems, terms for powers of twelve either already exist or remain easily reconstructed in English using words and affixes.
Base-12 nomenclature in English
Another nominal for twelve (12₁₀) is a dozen (10₁₂ or 1·10¹₁₂).
One hundred and forty-four (144₁₀) is also known as a gross (100₁₂ or 1·10²₁₂).
One thousand, seven hundred and twenty-eight (1,728₁₀) is also known as a great-gross (1,000₁₂ or 1·10³₁₂).
For the next powers of twelve that follow those aforementioned, the affixes (dozen-, gross-, great-) are used to produce names for these powers of twelve that have a greater positional-notation value. 20,736₁₀ or 10,000₁₂ may be rendered a dozen-great-gross; so 248,832₁₀ or 100,000₁₂ is a gross-great-gross, with 2,985,984₁₀ or 1,000,000₁₂ being known as a great-great-gross.
Note that indices which are multiples of three, e.g. 10³₁₂ [1,000₁₂], 10⁶₁₂ [1,000,000₁₂], 10⁹₁₂ [1,000,000,000₁₂], result, in these examples, in a great-gross, a great-great-gross, and a great-great-great-gross, respectively.
Duodecimal numbers
In this system, the prefix e- is added for fractions.
As numbers get larger (or fractions smaller), the last two morphemes are successively replaced with tri-mo, quad-mo, penta-mo, and so on.
Multiple digits in this series are pronounced differently: 12 is "do two"; 30 is "three do"; 100 is "gro"; BA9 is "el gro dek do nine"; B86 is "el gro eight do six"; 8BB,15A is "eight gro el do el, one gro five do dek"; ABA is "dek gro el do dek"; BBB is "el gro el do el"; 0.06 is "six egro"; and so on.
Systematic Dozenal Nomenclature (SDN)
This system uses "-qua" ending for the positive powers of 12 and "-cia" ending for the negative powers of 12, and an extension of the IUPAC systematic element names (with syllables dec and lev for the two extra digits needed for duodecimal) to express which power is meant.
After hex-, further prefixes continue sept-, oct-, enn-, dec-, lev-, unnil-, unun-.
Advocacy and "dozenalism"
William James Sidis used 12 as the base for his constructed language Vendergood in 1906, noting it being the smallest number with four factors and its prevalence in commerce.
The case for the duodecimal system was put forth at length in Frank Emerson Andrews' 1935 book New Numbers: How Acceptance of a Duodecimal Base Would Simplify Mathematics. Emerson noted that, due to the prevalence of factors of twelve in many traditional units of weight and measure, many of the computational advantages claimed for the metric system could be realized either by the adoption of ten-based weights and measure or by the adoption of the duodecimal number system.
Both the Dozenal Society of America and the Dozenal Society of Great Britain promote widespread adoption of the duodecimal system. They use the word "dozenal" instead of "duodecimal" to avoid the more overtly decimal terminology. However, the etymology of "dozenal" itself is also an expression based on decimal terminology since "dozen" is a direct derivation of the French word douzaine, which is a derivative of the French word for twelve, douze, descended from Latin duodecim.
Mathematician and mental calculator Alexander Craig Aitken was an outspoken advocate of duodecimal:
In media
In "Little Twelvetoes," an episode of the American educational television series Schoolhouse Rock!, a farmer encounters an alien being with twelve fingers on each hand and twelve toes on each foot who uses duodecimal arithmetic. The alien uses "dek" and "el" as names for ten and eleven, and Andrews' script-X and script-E for the digit symbols.
Duodecimal systems of measurements
Systems of measurement proposed by dozenalists include:
Tom Pendlebury's TGM system
Takashi Suga's Universal Unit System
John Volan's Primel system
Comparison to other number systems
In this section, numerals are in decimal. For example, "10" means 9+1, and "12" means 9+3.
The Dozenal Society of America argues that if a base is too small, significantly longer expansions are needed for numbers; if a base is too large, one must memorise a large multiplication table to perform arithmetic. Thus, it presumes that "a number base will need to be between about 7 or 8 through about 16, possibly including 18 and 20".
The number 12 has six factors, which are 1, 2, 3, 4, 6, and 12, of which 2 and 3 are prime. It is the smallest number to have six factors, the largest number to have at least half of the numbers below it as divisors, and is only slightly larger than 10. (The numbers 18 and 20 also have six factors but are much larger.) Ten, in contrast, only has four factors, which are 1, 2, 5, and 10, of which 2 and 5 are prime. Six shares the prime factors 2 and 3 with twelve; however, like ten, six only has four factors (1, 2, 3, and 6) instead of six. Its corresponding base, senary, is below the DSA's stated threshold.
Eight and sixteen only have 2 as a prime factor. Therefore, in octal and hexadecimal, the only terminating fractions are those whose denominator is a power of two.
Thirty is the smallest number that has three different prime factors (2, 3, and 5, the first three primes), and it has eight factors in total (1, 2, 3, 5, 6, 10, 15, and 30). Sexagesimal was actually used by the ancient Sumerians and Babylonians, among others; its base, sixty, adds the four convenient factors 4, 12, 20, and 60 to 30 but no new prime factors. The smallest number that has four different prime factors is 210; the pattern follows the primorials. However, these numbers are quite large to use as bases, and are far beyond the DSA's stated threshold.
In all base systems, there are similarities to the representation of multiples of numbers that are one less than or one more than the base. In the following multiplication table, numerals are written in duodecimal. For example, "10" means twelve, and "12" means fourteen.
Conversion tables to and from decimal
To convert numbers between bases, one can use the general conversion algorithm (see the relevant section under positional notation). Alternatively, one can use digit-conversion tables. The ones provided below can be used to convert any duodecimal number between 0;1 and BB,BBB;B to decimal, or any decimal number between 0.1 and 99,999.9 to duodecimal. To use them, the given number must first be decomposed into a sum of numbers with only one significant digit each. For example:
12,345.6 = 10,000 + 2,000 + 300 + 40 + 5 + 0.6
This decomposition works the same no matter what base the number is expressed in. Just isolate each non-zero digit, padding them with as many zeros as necessary to preserve their respective place values. If the digits in the given number include zeroes (for example, 7,080.9), these are left out in the digit decomposition (7,080.9 = 7,000 + 80 + 0.9). Then, the digit conversion tables can be used to obtain the equivalent value in the target base for each digit. If the given number is in duodecimal and the target base is decimal, we get:
(duodecimal) 10,000 + 2,000 + 300 + 40 + 5 + 0;6 = (decimal) 20,736 + 3,456 + 432 + 48 + 5 + 0.5
Because the summands are already converted to decimal, the usual decimal arithmetic is used to perform the addition and recompose the number, arriving at the conversion result:
Duodecimal ---> Decimal
10,000 = 20,736
2,000 = 3,456
300 = 432
40 = 48
5 = 5
+ 0;6 = + 0.5
-----------------------------
12,345;6 = 24,677.5
That is, (duodecimal) 12,345;6 equals (decimal) 24,677.5
If the given number is in decimal and the target base is duodecimal, the method is same. Using the digit conversion tables:
(decimal) 10,000 + 2,000 + 300 + 40 + 5 + 0.6 = (duodecimal) 5,954 + 1,1A8 + 210 + 34 + 5 + 0;7249… (7249 recurring)
To sum these partial products and recompose the number, the addition must be done with duodecimal rather than decimal arithmetic:
Decimal --> Duodecimal
10,000 = 5,954
2,000 = 1,1A8
300 = 210
40 = 34
5 = 5
+ 0.6 = + 0;7249…
-------------------------------
12,345.6 = 7,189;7249…
That is, (decimal) 12,345.6 equals (duodecimal) 7,189;7249…, with the digit block 7249 recurring.
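The two worked examples can be checked with a short script. This is only a sketch of one possible implementation: it evaluates each place value directly rather than using the digit tables, and it truncates the recurring fractional part after four places.

```python
DIGITS = "0123456789AB"

def duodecimal_to_decimal(s):
    """Value of a duodecimal string such as '12,345;6' (';' is the Humphrey point)."""
    whole, _, frac = s.replace(",", "").partition(";")
    value = sum(DIGITS.index(d) * 12 ** i for i, d in enumerate(reversed(whole)))
    return value + sum(DIGITS.index(d) * 12.0 ** -(i + 1) for i, d in enumerate(frac))

def decimal_to_duodecimal(x, places=4):
    """Duodecimal string for a non-negative number, truncated after `places` digits."""
    whole, frac = int(x), x - int(x)
    digits = "0" if whole == 0 else ""
    while whole:
        whole, rem = divmod(whole, 12)
        digits = DIGITS[rem] + digits
    out = digits + ";"
    for _ in range(places):
        frac *= 12
        out += DIGITS[int(frac)]
        frac -= int(frac)
    return out

print(duodecimal_to_decimal("12,345;6"))   # 24677.5
print(decimal_to_duodecimal(12345.6))      # 7189;7249 (recurring part truncated)
```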
Duodecimal to decimal digit conversion
Decimal to duodecimal digit conversion
Fractions and irrational numbers
Fractions
Duodecimal fractions for rational numbers with 3-smooth denominators terminate:
1/2 = 0;6
1/3 = 0;4
1/4 = 0;3
1/6 = 0;2
1/8 = 0;16
1/9 = 0;14
1/10 = 0;1 (this is one twelfth; 1/A is one tenth)
1/14 = 0;09 (this is one sixteenth; 1/12 is one fourteenth)
while other rational numbers have recurring duodecimal fractions:
1/5 = 0;2497… (2497 recurring)
1/7 = 0;186A35… (186A35 recurring)
1/A = 0;12497… (2497 recurring; one tenth)
1/B = 0;111… (1 recurring; one eleventh)
1/11 = 0;0B0B… (0B recurring; one thirteenth)
1/12 = 0;0A35186… (A35186 recurring; one fourteenth)
1/13 = 0;09724… (9724 recurring; one fifteenth)
As explained in recurring decimals, whenever an irreducible fraction is written in radix point notation in any base, the fraction can be expressed exactly (terminates) if and only if all the prime factors of its denominator are also prime factors of the base.
Because 10 = 2 × 5, in the decimal system fractions whose denominators are made up solely of multiples of 2 and 5 terminate: 1/8 = 1/(2·2·2), 1/20 = 1/(2·2·5), and 1/500 = 1/(2·2·5·5·5) can be expressed exactly as 0.125, 0.05, and 0.002 respectively. 1/3 and 1/7, however, recur (0.333... and 0.142857142857...).
Because 12 = 2 × 2 × 3, in the duodecimal system 1/8 is exact; 1/20 and 1/500 recur because they include 5 as a factor; 1/3 is exact, and 1/7 recurs, just as it does in decimal.
The number of denominators that give terminating fractions within a given number of digits, n, in a base b is the number of factors (divisors) of bⁿ, the nth power of the base (although this includes the divisor 1, which does not produce fractions when used as the denominator). The number of factors of bⁿ is given using its prime factorization.
For decimal, 10ⁿ = 2ⁿ × 5ⁿ. The number of divisors is found by adding one to each exponent of each prime and multiplying the resulting quantities together, so the number of factors of 10ⁿ is (n + 1)(n + 1) = (n + 1)².
For example, the number 8 is a factor of 10³ (1000), so 1/8 and other fractions with a denominator of 8 cannot require more than three fractional decimal digits to terminate.
For duodecimal, 12ⁿ = 2²ⁿ × 3ⁿ. This has (2n + 1)(n + 1) divisors. The sample denominator of 8 is a factor of a gross (12² = 144 in decimal), so eighths cannot need more than two duodecimal fractional places to terminate.
Because both ten and twelve have two unique prime factors, the number of divisors of bⁿ for b = 10 or 12 grows quadratically with the exponent n (in other words, of the order of n²).
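A short sketch verifying these divisor counts for the first few exponents: the number of divisors of 10ⁿ is (n + 1)², while for 12ⁿ = 2²ⁿ · 3ⁿ it is (2n + 1)(n + 1).

```python
def divisor_count(n):
    """Number of positive divisors of n, by simple trial division."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

for n in range(1, 5):
    print(n, divisor_count(10**n), (n + 1) ** 2,
          divisor_count(12**n), (2 * n + 1) * (n + 1))
# n=1: 4 4 and 6 6;  n=2: 9 9 and 15 15;  n=3: 16 16 and 28 28;  n=4: 25 25 and 45 45
```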
Recurring digits
The Dozenal Society of America argues that factors of 3 are more commonly encountered in real-life division problems than factors of 5. Thus, in practical applications, the nuisance of repeating decimals is encountered less often when duodecimal notation is used. Advocates of duodecimal systems argue that this is particularly true of financial calculations, in which the twelve months of the year often enter into calculations.
However, when recurring fractions do occur in duodecimal notation, they are less likely to have a very short period than in decimal notation, because 12 (twelve) is between two prime numbers, 11 (eleven) and 13 (thirteen), whereas ten is adjacent to the composite number 9. Nonetheless, having a shorter or longer period does not help the main inconvenience that one does not get a finite representation for such fractions in the given base (so rounding, which introduces inexactitude, is necessary to handle them in calculations), and overall one is more likely to have to deal with infinite recurring digits when fractions are expressed in decimal than in duodecimal, because one out of every three consecutive numbers contains the prime factor 3 in its factorization, whereas only one out of every five contains the prime factor 5. All other prime factors, except 2, are not shared by either ten or twelve, so they do not
influence the relative likeliness of encountering recurring digits (any irreducible fraction that contains any of these other factors in its denominator will recur in either base).
Also, the prime factor 2 appears twice in the factorization of twelve, whereas only once in the factorization of ten; which means that most fractions whose denominators are powers of two will have a shorter, more convenient terminating representation in duodecimal than in decimal:
1/(2²) = 0.25₁₀ = 0.3₁₂
1/(2³) = 0.125₁₀ = 0.16₁₂
1/(2⁴) = 0.0625₁₀ = 0.09₁₂
1/(2⁵) = 0.03125₁₀ = 0.046₁₂
The duodecimal period lengths of 1/n are (in decimal)
0, 0, 0, 0, 4, 0, 6, 0, 0, 4, 1, 0, 2, 6, 4, 0, 16, 0, 6, 4, 6, 1, 11, 0, 20, 2, 0, 6, 4, 4, 30, 0, 1, 16, 12, 0, 9, 6, 2, 4, 40, 6, 42, 1, 4, 11, 23, 0, 42, 20, 16, 2, 52, 0, 4, 6, 6, 4, 29, 4, 15, 30, 6, 0, 4, 1, 66, 16, 11, 12, 35, 0, ...
The duodecimal period lengths of 1/(nth prime) are (in decimal)
0, 0, 4, 6, 1, 2, 16, 6, 11, 4, 30, 9, 40, 42, 23, 52, 29, 15, 66, 35, 36, 26, 41, 8, 16, 100, 102, 53, 54, 112, 126, 65, 136, 138, 148, 150, 3, 162, 83, 172, 89, 90, 95, 24, 196, 66, 14, 222, 113, 114, 8, 119, 120, 125, 256, 131, 268, 54, 138, 280, ...
The smallest primes with duodecimal period n are (in decimal):
11, 13, 157, 5, 22621, 7, 659, 89, 37, 19141, 23, 20593, 477517, 211, 61, 17, 2693651, 1657, 29043636306420266077, 85403261, 8177824843189, 57154490053, 47, 193, 303551, 79, 306829, 673, 59, 31, 373, 153953, 886381, 2551, 71, 73, ...
Irrational numbers
The representations of irrational numbers in any positional number system (including decimal and duodecimal) neither terminate nor repeat. The following table gives the first digits for some important algebraic and transcendental numbers in both decimal and duodecimal.
See also
Vigesimal (base 20)
Sexagesimal (base 60)
References
External links
Dozenal Society of America
"The DSA Symbology Synopsis"
"Resources", the DSA website's page of external links to third-party tools
Dozenal Society of Great Britain
Positional numeral systems
12 (number) | Duodecimal | [
"Mathematics"
] | 5,572 | [
"Numeral systems",
"Positional numeral systems"
] |
8,407 | https://en.wikipedia.org/wiki/Dodecahedron | In geometry, a dodecahedron (; ) or duodecahedron is any polyhedron with twelve flat faces. The most familiar dodecahedron is the regular dodecahedron with regular pentagons as faces, which is a Platonic solid. There are also three regular star dodecahedra, which are constructed as stellations of the convex form. All of these have icosahedral symmetry, order 120.
Some dodecahedra have the same combinatorial structure as the regular dodecahedron (in terms of the graph formed by its vertices and edges), but their pentagonal faces are not regular:
The pyritohedron, a common crystal form in pyrite, has pyritohedral symmetry, while the tetartoid has tetrahedral symmetry.
The rhombic dodecahedron can be seen as a limiting case of the pyritohedron, and it has octahedral symmetry. The elongated dodecahedron and trapezo-rhombic dodecahedron variations, along with the rhombic dodecahedra, are space-filling. There are numerous other dodecahedra.
While the regular dodecahedron shares many features with other Platonic solids, one unique property of it is that one can start at a corner of the surface and draw an infinite number of straight lines across the figure that return to the original point without crossing over any other corner.
Regular dodecahedron
The convex regular dodecahedron is one of the five regular Platonic solids and can be represented by its Schläfli symbol {5, 3}.
The dual polyhedron is the regular icosahedron {3, 5}, having five equilateral triangles around each vertex.
The convex regular dodecahedron also has three stellations, all of which are regular star dodecahedra. They form three of the four Kepler–Poinsot polyhedra. They are the small stellated dodecahedron {5/2, 5}, the great dodecahedron {5, 5/2}, and the great stellated dodecahedron {5/2, 3}. The small stellated dodecahedron and great dodecahedron are dual to each other; the great stellated dodecahedron is dual to the great icosahedron {3, 5/2}. All of these regular star dodecahedra have regular pentagonal or pentagrammic faces. The convex regular dodecahedron and great stellated dodecahedron are different realisations of the same abstract regular polyhedron; the small stellated dodecahedron and great dodecahedron are different realisations of another abstract regular polyhedron.
Other pentagonal dodecahedra
In crystallography, two important dodecahedra can occur as crystal forms in some symmetry classes of the cubic crystal system that are topologically equivalent to the regular dodecahedron but less symmetrical: the pyritohedron with pyritohedral symmetry, and the tetartoid with tetrahedral symmetry:
Pyritohedron
A pyritohedron is a dodecahedron with pyritohedral (Th) symmetry. Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices (see figure). However, the pentagons are not constrained to be regular, and the underlying atomic arrangement has no true fivefold symmetry axis. Its 30 edges are divided into two sets – containing 24 and 6 edges of the same length. The only axes of rotational symmetry are three mutually perpendicular twofold axes and four threefold axes.
Although regular dodecahedra do not exist in crystals, the pyritohedron form occurs in the crystals of the mineral pyrite, and it may be an inspiration for the discovery of the regular Platonic solid form. The true regular dodecahedron can occur as a shape for quasicrystals (such as holmium–magnesium–zinc quasicrystal) with icosahedral symmetry, which includes true fivefold rotation axes.
Crystal pyrite
The name crystal pyrite comes from one of the two common crystal habits shown by pyrite (the other one being the cube). In pyritohedral pyrite, the faces have a Miller index of (210), which means that the dihedral angle is 2·arctan(2) ≈ 126.87° and each pentagonal face has one angle of approximately 121.6° in between two angles of approximately 106.6° and opposite two angles of approximately 102.6°. The following formulas show the measurements for the face of a perfect crystal (which is rarely found in nature).
Cartesian coordinates
The eight vertices of a cube have the coordinates (±1, ±1, ±1).
The coordinates of the 12 additional vertices are
(0, ±(1 + h), ±(1 − h²)),
(±(1 + h), ±(1 − h²), 0) and
(±(1 − h²), 0, ±(1 + h)).
h is the height of the wedge-shaped "roof" above the faces of that cube with edge length 2.
An important case is h = 1/2 (a quarter of the cube edge length) for perfect natural pyrite (also the pyritohedron in the Weaire–Phelan structure).
Another one is h = 1/φ = 0.618... for the regular dodecahedron, where φ is the golden ratio. See section Geometric freedom for other cases.
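As a rough illustration of the parametrization above (the function name is made up for this sketch), the 20 vertices can be generated directly from h; h = 1/2 gives the pyrite habit and h = 1/φ the regular dodecahedron:

```python
import itertools

def pyritohedron_vertices(h):
    """20 vertices of a pyritohedron built on the cube of edge length 2 described above."""
    verts = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]  # cube corners
    a, b = 1 + h, 1 - h * h
    for s1, s2 in itertools.product((-1, 1), repeat=2):
        verts.append((0, s1 * a, s2 * b))
        verts.append((s1 * a, s2 * b, 0))
        verts.append((s1 * b, 0, s2 * a))
    return verts

pyrite = pyritohedron_vertices(0.5)                   # h = 1/2, the natural pyrite habit
regular = pyritohedron_vertices((5 ** 0.5 - 1) / 2)   # h = 1/phi gives the regular dodecahedron
print(len(pyrite), len(regular))                      # 20 20
```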
Two pyritohedra with swapped nonzero coordinates are in dual positions to each other like the dodecahedra in the compound of two dodecahedra.
Geometric freedom
The pyritohedron has a geometric degree of freedom with limiting cases of a cubic convex hull at one limit of collinear edges, and a rhombic dodecahedron as the other limit as 6 edges are degenerated to length zero. The regular dodecahedron represents a special intermediate case where all edges and angles are equal.
It is possible to go past these limiting cases, creating concave or nonconvex pyritohedra. The endo-dodecahedron is concave and equilateral; it can tessellate space with the convex regular dodecahedron. Continuing from there in that direction, we pass through a degenerate case where twelve vertices coincide in the centre, and on to the regular great stellated dodecahedron where all edges and angles are equal again, and the faces have been distorted into regular pentagrams. On the other side, past the rhombic dodecahedron, we get a nonconvex equilateral dodecahedron with fish-shaped self-intersecting equilateral pentagonal faces.
Tetartoid
A tetartoid (also tetragonal pentagonal dodecahedron, pentagon-tritetrahedron, and tetrahedric pentagon dodecahedron) is a dodecahedron with chiral tetrahedral symmetry (T). Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices. However, the pentagons are not regular and the figure has no fivefold symmetry axes.
Although regular dodecahedra do not exist in crystals, the tetartoid form does. The name tetartoid comes from the Greek root for one-fourth because it has one fourth of full octahedral symmetry, and half of pyritohedral symmetry. The mineral cobaltite can have this symmetry form.
Abstractions sharing the solid's topology and symmetry can be created from the cube and the tetrahedron. In the cube each face is bisected by a slanted edge. In the tetrahedron each edge is trisected, and each of the new vertices connected to a face center. (In Conway polyhedron notation this is a gyro tetrahedron.)
Cartesian coordinates
The following points are vertices of a tetartoid pentagon under tetrahedral symmetry:
(a, b, c); (−a, −b, c); (−n/d1, −n/d1, n/d1); (−c, −a, b); (−n/d2, n/d2, n/d2),
under the following conditions:
0 ≤ a ≤ b ≤ c,
n = a²c − bc²,
d1 = a² − ab + b² + ac − 2bc,
d2 = a² + ab + b² − ac − 2bc,
nd1d2 ≠ 0.
Geometric freedom
The regular dodecahedron is a tetartoid with more than the required symmetry. The triakis tetrahedron is a degenerate case with 12 zero-length edges. (In terms of the colors used above, this means that the white vertices and green edges are absorbed by the green vertices.)
Dual of triangular gyrobianticupola
A lower symmetry form of the regular dodecahedron can be constructed as the dual of a polyhedron built from two triangular anticupolae connected base-to-base, called a triangular gyrobianticupola. It has D3d symmetry, order 12. It has 2 sets of 3 identical pentagons on the top and bottom, connected by 6 pentagons around the sides which alternate upwards and downwards. This form has a hexagonal cross-section, and identical copies can be connected as a partial hexagonal honeycomb, but not all vertices will match.
Rhombic dodecahedron
The rhombic dodecahedron is a zonohedron with twelve rhombic faces and octahedral symmetry. It is dual to the quasiregular cuboctahedron (an Archimedean solid) and occurs in nature as a crystal form. The rhombic dodecahedron packs together to fill space.
The rhombic dodecahedron can be seen as a degenerate pyritohedron where the 6 special edges have been reduced to zero length, reducing the pentagons into rhombic faces.
The rhombic dodecahedron has several stellations, the first of which is also a parallelohedral spacefiller.
Another important rhombic dodecahedron, the Bilinski dodecahedron, has twelve faces congruent to those of the rhombic triacontahedron, i.e. the diagonals are in the ratio of the golden ratio. It is also a zonohedron and was described by Bilinski in 1960. This figure is another spacefiller, and can also occur in non-periodic spacefillings along with the rhombic triacontahedron, the rhombic icosahedron and rhombic hexahedra.
Other dodecahedra
There are 6,384,634 topologically distinct convex dodecahedra, excluding mirror images—the number of vertices ranges from 8 to 20. (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.)
Topologically distinct dodecahedra (excluding pentagonal and rhombic forms)
Uniform polyhedra:
Decagonal prism – 10 squares, 2 decagons, D10h symmetry, order 40.
Pentagonal antiprism – 10 equilateral triangles, 2 pentagons, D5d symmetry, order 20
Johnson solids (regular faced):
Pentagonal cupola – 5 triangles, 5 squares, 1 pentagon, 1 decagon, C5v symmetry, order 10
Snub disphenoid – 12 triangles, D2d, order 8
Elongated square dipyramid – 8 triangles and 4 squares, D4h symmetry, order 16
Metabidiminished icosahedron – 10 triangles and 2 pentagons, C2v symmetry, order 4
Congruent irregular faced: (face-transitive)
Hexagonal bipyramid – 12 isosceles triangles, dual of hexagonal prism, D6h symmetry, order 24
Hexagonal trapezohedron – 12 kites, dual of hexagonal antiprism, D6d symmetry, order 24
Triakis tetrahedron – 12 isosceles triangles, dual of truncated tetrahedron, Td symmetry, order 24
Other less regular faced:
Hendecagonal pyramid – 11 isosceles triangles and 1 regular hendecagon, C11v, order 11
Trapezo-rhombic dodecahedron – 6 rhombi, 6 trapezoids – dual of triangular orthobicupola, D3h symmetry, order 12
Rhombo-hexagonal dodecahedron or elongated Dodecahedron – 8 rhombi and 4 equilateral hexagons, D4h symmetry, order 16
Truncated pentagonal trapezohedron, D5d, order 20, topologically equivalent to regular dodecahedron
Practical usage
Armand Spitz used a dodecahedron as the "globe" equivalent for his Digital Dome planetarium projector, based upon a suggestion from Albert Einstein.
Regular dodecahedrons are sometimes used as dice, when they are known as d12s, especially in games such as Dungeons and Dragons.
See also
120-cell – a regular polychoron (4D polytope) whose surface consists of 120 dodecahedral cells
Braarudosphaera bigelowii – a dodecahedron-shaped coccolithophore (a unicellular phytoplankton algae)
Pentakis dodecahedron
Roman dodecahedron
Snub dodecahedron
Truncated dodecahedron
References
External links
Plato's Fourth Solid and the "Pyritohedron", by Paul Stephenson, 1993, The Mathematical Gazette, Vol. 77, No. 479 (Jul., 1993), pp. 220–226
Stellation of Pyritohedron VRML models and animations of Pyritohedron and its stellations
Editable printable net of a dodecahedron with interactive 3D view
The Uniform Polyhedra
Origami Polyhedra – Models made with Modular Origami
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
K.J.M. MacLean, A Geometric Analysis of the Five Platonic Solids and Other Semi-Regular Polyhedra
Dodecahedron 3D Visualization
Stella: Polyhedron Navigator: Software used to create some of the images on this page.
How to make a dodecahedron from a Styrofoam cube
Individual graphs
Planar graphs
Platonic solids
12 (number) | Dodecahedron | [
"Mathematics"
] | 2,978 | [
"Planes (geometry)",
"Planar graphs"
] |
8,410 | https://en.wikipedia.org/wiki/Decibel | The decibel (symbol: dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of 10^(1/10) (approximately 1.26) or root-power ratio of 10^(1/20) (approximately 1.12).
The unit fundamentally expresses a relative change but may also be used to express an absolute value as the ratio of a value to a fixed reference value; when used in this way, the unit symbol is often suffixed with letter codes that indicate the reference value. For example, for the reference value of 1 volt, a common suffix is "V" (e.g., "20 dBV").
Two principal types of scaling of the decibel are in common use. When expressing a power ratio, it is defined as ten times the logarithm with base 10. That is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing root-power quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two, so that the related power and root-power levels change by the same value in linear systems, where power is proportional to the square of amplitude.
The definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the Bell System in the United States. The bel was named in honor of Alexander Graham Bell, but the bel is seldom used. Instead, the decibel is used for a wide variety of measurements in science and engineering, most prominently for sound power in acoustics, in electronics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels.
History
The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. Until the mid-1920s, the unit for loss was miles of standard cable (MSC). 1 MSC corresponded to the loss of power over one mile (approximately 1.6 km) of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to a listener. A standard telephone cable was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire).
In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power.
The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU into the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell.
The bel is seldom used, as the decibel was the proposed working unit.
The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931:
In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name logit for "standard magnitudes which combine by multiplication", to contrast with the name unit for "standard magnitudes which combine by addition".
In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with root-power quantities as well as power and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO.
Definition
The IEC Standard 60027-3:2002 defines the following quantities. The decibel (dB) is one-tenth of a bel: 1 dB = 0.1 B. The bel (B) is (1/2) ln(10) nepers: 1 B = (1/2) ln(10) Np. The neper is the change in the level of a root-power quantity when the root-power quantity changes by a factor of e, that is 1 Np = ln(e) = 1, thereby relating all of the units as nondimensional natural logarithms of root-power-quantity ratios: 1 dB = 0.1 B = (1/20) ln(10) Np ≈ 0.1151 Np. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity.
Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two root-power quantities of √10:1.
Two signals whose levels differ by one decibel have a power ratio of 10^(1/10), which is approximately 1.26, and an amplitude (root-power quantity) ratio of 10^(1/20) (approximately 1.12).
The bel is rarely used either without a prefix or with SI unit prefixes other than deci; it is customary, for example, to use hundredths of a decibel rather than millibels. Thus, five one-thousandths of a bel would normally be written 0.05 dB, and not 5 mB.
The method of expressing a ratio as a level in decibels depends on whether the measured property is a power quantity or a root-power quantity; see Power, root-power, and field quantities for details.
Power quantities
When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to the reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by LP, that ratio expressed in decibels, which is calculated using the formula: LP = 10 log10(P / P0) dB.
The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units before calculating the ratio. If P = P0 in the above equation, then LP = 0. If P is greater than P0 then LP is positive; if P is less than P0 then LP is negative.
Rearranging the above equation gives the following formula for P in terms of P0 and LP: P = P0 · 10^(LP/10).
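A minimal Python sketch of these two power-quantity formulas (function names are illustrative only):

```python
import math

def power_level_db(p, p_ref):
    """Level in decibels of power p relative to reference power p_ref (same units)."""
    return 10 * math.log10(p / p_ref)

def power_from_level_db(level_db, p_ref):
    """Invert the level: recover the power from the reference and the level in dB."""
    return p_ref * 10 ** (level_db / 10)

print(power_level_db(1000, 1))     # 30.0 (1 kW relative to 1 W)
print(power_from_level_db(30, 1))  # 1000.0
```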
Root-power (field) quantities
When referring to measurements of root-power quantities, it is usual to consider the ratio of the squares of F (measured) and F0 (reference). This is because the definitions were originally formulated to give the same value for relative ratios for both power and root-power quantities. Thus, the following definition is used: LF = 10 log10(F² / F0²) dB = 20 log10(F / F0) dB.
The formula may be rearranged to give F = F0 · 10^(LF/20).
Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level LG: LG = 20 log10(Vout / Vin) dB,
where Vout is the root-mean-square (rms) output voltage, Vin is the rms input voltage. A similar formula holds for current.
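For root-power quantities such as voltage, the same idea uses the factor 20; a small sketch (again with made-up function names):

```python
import math

def voltage_gain_db(v_out, v_in):
    """Root-power (voltage) gain in dB; factor 20 because power goes as voltage squared."""
    return 20 * math.log10(v_out / v_in)

print(round(voltage_gain_db(31.62, 1.0), 2))  # 30.0 dB, matching the 1 kW : 1 W power example
print(round(voltage_gain_db(2.0, 1.0), 2))    # 6.02 dB for a doubling of voltage
```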
The term root-power quantity is introduced by ISO Standard 80000-1:2009 as a substitute of field quantity. The term field quantity is deprecated by that standard and root-power is used throughout this article.
Relationship between power and root-power levels
Although power and root-power quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make changes in the respective levels match under restricted conditions such as when the medium is linear and the same waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship
P(t) / P0 = (F(t) / F0)² holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes.
For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities P and F need not be related), or equivalently,
P2 / P1 = (F2 / F1)² must hold to allow the power level difference to be equal to the root-power level difference from power P1 and F1 to P2 and F2. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated root-power quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently.
Conversions
Since logarithm differences measured in these units often represent power ratios and root-power ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic root-power (amplitude) ratio.
Examples
The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point.
Calculating the ratio in decibels of 1 kW (one kilowatt, or 1000 watts) to 1 W yields: LG = 10 log10(1000 W / 1 W) dB = 30 dB.
The ratio in decibels of √1000 V ≈ 31.62 V to 1 V is: LG = 20 log10(31.62 V / 1 V) dB = 30 dB.
(31.62 V / 1 V)² ≈ 1 kW / 1 W, illustrating the consequence from the definitions above that LG has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared.
The ratio in decibels of a measured power P to 1 mW (one milliwatt) is obtained with the formula: LP = 10 log10(P / 1 mW) dBm.
The power ratio corresponding to a change in level of ΔL is given by: P / P0 = 10^(ΔL/10).
A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 or 1/2 is approximately a change of 3 dB. More precisely, the change is ±3.0103 dB, but this is almost universally rounded to 3 dB in technical writing. This implies an increase in voltage by a factor of √2 ≈ 1.414. Likewise, a doubling or halving of the voltage, corresponding to a quadrupling or quartering of the power, is commonly described as 6 dB rather than ±6.0206 dB.
Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of 10^(3/10), or 1.9953, about 0.24% different from exactly 2, and a voltage ratio of 1.4125, about 0.12% different from exactly √2. Similarly, an increase of 6.000 dB corresponds to a power ratio of 10^(6/10), or 3.9811, about 0.5% different from 4.
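The rounded values quoted above can be verified directly, for example:

```python
import math

print(round(10 * math.log10(2), 4))   # 3.0103 dB for a power ratio of exactly 2
print(round(20 * math.log10(2), 4))   # 6.0206 dB for a voltage ratio of exactly 2
print(round(10 ** 0.3, 4))            # 1.9953, the power ratio of exactly 3.000 dB
print(round(10 ** 0.6, 4))            # 3.9811, the power ratio of exactly 6.000 dB
```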
Properties
The decibel is useful for representing large ratios and for simplifying representation of multiplicative effects, such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive, such as in the combined sound pressure level of two machines operating together. Care is also necessary with decibels directly in fractions and with the units of multiplicative operations.
Reporting large ratios
The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See Bode plot and Semi-log plot. For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing".
Representation of multiplication operations
Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, log(A × B × C) = log(A) + log(B) + log(C). Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately 2× power gain, and 10 dB is 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. For example:
A system consists of 3 amplifiers in series, with gains (ratio of power out to in) of 10 dB, 8 dB, and 7 dB respectively, for a total gain of 25 dB. Broken into combinations of 10, 3, and 1 dB, this is: 25 dB = 10 dB + 10 dB + 3 dB + 1 dB + 1 dB. With an input of 1 watt, the output is approximately 1 W × 10 × 10 × 2 × 1.26 × 1.26 ≈ 317.5 W. Calculated precisely, the output is 1 W × 10^(25/10) ≈ 316.2 W. The approximate value has an error of only +0.4% with respect to the actual value, which is negligible given the precision of the values supplied and the accuracy of most measurement instrumentation.
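The same worked example, expressed as a few lines of Python (values as in the text; nothing here is normative):

```python
stages_db = [10, 8, 7]                       # gains of the three amplifier stages
total_db = sum(stages_db)                    # 25 dB: decibel gains simply add
exact = 1 * 10 ** (total_db / 10)            # precise output power for a 1 W input
approx = 1 * 10 * 10 * 2 * 1.26 * 1.26       # mental arithmetic: 10 + 10 + 3 + 1 + 1 dB
print(total_db, round(exact, 1), round(approx, 1))   # 25 316.2 317.5
```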
However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret.
Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis".
Thus, units require special care in decibel operations. Take, for example, carrier-to-noise-density ratio C/N0 (in hertz), involving carrier power C (in watts) and noise power spectral density N0 (in W/Hz). Expressed in decibels, this ratio would be a subtraction (C/N0)dB = CdB − N0 dB. However, the linear-scale units still simplify in the implied fraction, so that the results would be expressed in dB-Hz.
Representation of addition operations
According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations:if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!; suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA.; in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB.
Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return. For example, the logarithmic sum of two levels L1 and L2 is 10 log10(10^(L1/10) + 10^(L2/10)), where operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual operations.
The logarithmic mean is obtained from the logarithmic sum by subtracting 10 log10(2) ≈ 3.01 dB, since logarithmic division is linear subtraction.
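A small sketch of logarithmic addition, subtraction, and averaging for incoherent levels, reproducing the figures quoted above (helper names are illustrative):

```python
import math

def db_sum(*levels):
    """Logarithmic addition of incoherent levels given in dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

def db_subtract(total, background):
    """Remove a background level from a combined measurement, both in dB."""
    return 10 * math.log10(10 ** (total / 10) - 10 ** (background / 10))

print(round(db_sum(90, 90), 1))                       # 93.0: two 90 dB machines together
print(round(db_subtract(87, 83), 1))                  # 84.8: machine alone, background removed
print(round(db_sum(70, 90) - 10 * math.log10(2), 1))  # 87.0: logarithmic average of 70 and 90 dB
```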
Fractions
Attenuation constants, in topics such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. In this case, dB/m represents decibel per meter, dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km.
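The same dimensional bookkeeping in code form, using the 3.5 dB/km example:

```python
attenuation_db_per_km = 3.5   # fiber attenuation, as in the example above
length_km = 0.1               # a 100-meter run
print(round(attenuation_db_per_km * length_km, 2))  # 0.35 dB of loss
```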
Uses
Perception
The human perception of the intensity of sound and light more nearly approximates the logarithm of intensity rather than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure.
Acoustics
The decibel is commonly used in acoustics as a unit of sound power level or sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human and there are common comparisons used to illustrate different levels of sound pressure. As sound pressure is a root-power quantity, the appropriate version of the unit definition is used: Lp = 20 log10(prms / pref) dB,
where prms is the root mean square of the measured sound pressure and pref is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water.
Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value.
Sound intensity is proportional to the square of sound pressure. Therefore, the sound intensity level can also be defined as: LI = 10 log10(I / I0) dB.
The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is equal to or greater than 1 trillion (10^12). Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of 10^12 is 12, which is expressed as a sound intensity level of 120 dB re 1 pW/m². The reference values of I and p in air have been chosen such that this corresponds approximately to a sound pressure level of 120 dB re 20 μPa.
Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels.
Telephony
The decibel is used in telephony and audio. Similarly to the use in acoustics, a frequency weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings.
Electronics
In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget.
The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW).
In professional audio specifications, a popular unit is the dBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or ≈ 0.775 VRMS. When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical.
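Assuming only the reference values stated above (1 mW for dBm, √0.6 V ≈ 0.775 V for dBu), the absolute quantities can be recovered as follows (illustrative helper names):

```python
import math

def dbm_to_watts(dbm):
    """Absolute power implied by a dBm value (reference 1 mW)."""
    return 1e-3 * 10 ** (dbm / 10)

def dbu_to_volts(dbu):
    """RMS voltage implied by a dBu value (reference sqrt(0.6) V, about 0.775 V)."""
    return math.sqrt(0.6) * 10 ** (dbu / 20)

print(dbm_to_watts(0))            # 0.001 W, i.e. 1 mW
print(round(dbm_to_watts(1), 6))  # 0.001259 W, about 1.259 mW
print(round(dbu_to_volts(0), 3))  # 0.775 V
```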
Optics
In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities.
In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B.
Video and digital imaging
In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity.
Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest.
Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear.
However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value.
Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range.
Suffixes and reference values
Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt.
In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative.
This form of attaching suffixes to dB is widespread in practice, albeit being against the rules promulgated by standards bodies (ISO and IEC), given the "unacceptability of attaching information to units" and the "unacceptability of mixing information with units". The IEC 60027-3 standard recommends the following format: Lx (re xref) or Lx/xref, where x is the quantity symbol and xref is the value of the reference quantity, e.g., LE (re 1 μV/m) = 20 dB or LE/(1 μV/m) = 20 dB for the electric field strength E relative to the 1 μV/m reference value.
If the measurement result 20 dB is presented separately, it can be specified using the information in parentheses, which is then part of the surrounding text and not a part of the unit: 20 dB (re: 1 μV/m) or 20 dB (1 μV/m).
Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W","K","m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dBHz", or with a space, as in "dB HL", or enclosed in parentheses, as in "dB(HL)", or with no intervening character, as in "dBm" (which is non-compliant with international standards).
List of suffixes
Voltage
Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above.
dBV dB(VRMS) – voltage relative to 1 volt, regardless of impedance. This is used to measure microphone sensitivity, and also to specify the consumer line-level of −10 dBV, in order to reduce manufacturing costs relative to equipment using a +4 dBu line-level signal.
dBu or dBv RMS voltage relative to √0.6 V ≈ 0.7746 V (i.e. the voltage that would dissipate 1 mW into a 600 Ω load). An RMS voltage of 1 V therefore corresponds to 2.218 dBu. Originally dBv, it was changed to dBu to avoid confusion with dBV. The v comes from volt, while u comes from the volume unit displayed on a VU meter. dBu can be used as a measure of voltage, regardless of impedance, but is derived from a 600 Ω load dissipating 0 dBm (1 mW). The reference voltage comes from the computation V = √(R × P) = √(600 Ω × 0.001 W), where R is the resistance and P is the power.
In professional audio, equipment may be calibrated to indicate a "0" on the VU meters some finite time after a signal has been applied at an amplitude of +4 dBu. Consumer equipment typically uses a lower "nominal" signal level of −10 dBV. Therefore, many devices offer dual voltage operation (with different gain or "trim" settings) for interoperability reasons. A switch or adjustment that covers at least the range between +4 dBu and −10 dBV is common in professional equipment.
dBu0s Defined by Recommendation ITU-R V.574; dBmV: dB(mVRMS) – root mean square voltage relative to 1 millivolt across 75 Ω. Widely used in cable television networks, where the nominal strength of a single TV signal at the receiver terminals is about 0 dBmV. Cable TV uses 75 Ω coaxial cable, so 0 dBmV corresponds to −78.75 dBW or approximately 13 nW.
dBμV or dBuV dB(μVRMS) – voltage relative to 1 microvolt. Widely used in television and aerial amplifier specifications. 60 dBμV = 0 dBmV.
Acoustics
Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing: The measures of pressure (a root-power quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10.
dB SPL dB SPL (sound pressure level) – for sound in air and other gases, relative to 20 micropascals (μPa), or 2 × 10⁻⁵ Pa, approximately the quietest sound a human can hear. For sound in water and other liquids, a reference pressure of 1 μPa is used. An RMS sound pressure of one pascal corresponds to a level of 94 dB SPL.
dB SIL dB sound intensity level – relative to 10⁻¹² W/m², which is roughly the threshold of human hearing in air.
dB SWL dB sound power level – relative to 10⁻¹² W.
dBA, dBB, and dBC These symbols are often used to denote the use of different weighting filters, used to approximate the human ear's response to sound, although the measurement is still in dB (SPL). These measurements usually refer to noise and its effects on humans and other animals, and they are widely used in industry while discussing noise control issues, regulations and environmental standards. Other variations that may be seen are dBA or dB(A). According to standards from the International Electrotechnical Commission (IEC 61672-2013) and the American National Standards Institute, ANSI S1.4, the preferred usage is to write LA = x dB. Nevertheless, the units dBA and dB(A) are still commonly used as a shorthand for A-weighted measurements. Compare dBc, used in telecommunications.
dB HL dB hearing level is used in audiograms as a measure of hearing loss. The reference level varies with frequency according to a minimum audibility curve as defined in ANSI and other standards, such that the resulting audiogram shows deviation from what is regarded as 'normal' hearing.
dB Q sometimes used to denote weighted noise level, commonly using the ITU-R 468 noise weighting
dBpp relative to the peak to peak sound pressure.
dBG G-weighted spectrum
Audio electronics
See also dBV and dBu above.
dBm dB(mW) – power relative to 1 milliwatt. In audio and telephony, dBm is typically referenced relative to a 600 Ω impedance, which corresponds to a voltage level of 0.775 volts or 775 millivolts.
dBm0 Power in dBm (described above) measured at a zero transmission level point.
dBFS dB(full scale) – the amplitude of a signal compared with the maximum which a device can handle before clipping occurs. Full-scale may be defined as the power level of a full-scale sinusoid or alternatively a full-scale square wave. A signal measured with reference to a full-scale sine-wave appears 3 dB weaker when referenced to a full-scale square wave, thus: 0 dBFS (full-scale sine wave) = −3 dBFS (full-scale square wave).
dBVU dB volume unit
dBTP dB(true peak) – peak amplitude of a signal compared with the maximum which a device can handle before clipping occurs. In digital systems, 0 dBTP would equal the highest level (number) the processor is capable of representing. Measured values are always negative or zero, since they are less than or equal to full-scale.
Radar
dBZ dB(Z) – decibel relative to Z = 1 mm⁶⋅m⁻³: energy of reflectivity (weather radar), related to the amount of transmitted power returned to the radar receiver. Values above 20 dBZ usually indicate falling precipitation.
dBsm dB(m²) – decibel relative to one square meter: measure of the radar cross section (RCS) of a target. The power reflected by the target is proportional to its RCS. "Stealth" aircraft and insects have negative RCS measured in dBsm, large flat plates or non-stealthy aircraft have positive values.
Radio power, energy, and field strength
dBc relative to carrier – in telecommunications, this indicates the relative levels of noise or sideband power, compared with the carrier power. Compare dBC, used in acoustics.
dBpp relative to the maximum value of the peak power.
dBJ energy relative to 1 joule. 1 joule = 1 watt second = 1 watt per hertz, so power spectral density can be expressed in dBJ.
dBm dB(mW) – power relative to 1 milliwatt. In the radio field, dBm is usually referenced to a 50 Ω load, with the resultant voltage being 0.224 volts.
dBμV/m, dBuV/m, or dBμ dB(μV/m) – electric field strength relative to 1 microvolt per meter. The unit is often used to specify the signal strength of a television broadcast at a receiving site (the signal measured at the antenna output is reported in dBμV).
dBf dB(fW) – power relative to 1 femtowatt.
dBW dB(W) – power relative to 1 watt.
dBk dB(kW) – power relative to 1 kilowatt.
dBe dB electrical.
dBo dB optical. A change of 1 dBo in optical power can result in a change of up to 2 dBe in electrical signal power in a system that is thermal noise limited.
Antenna measurements
dBi dB(isotropic) – the gain of an antenna compared with the gain of a theoretical isotropic antenna, which uniformly distributes energy in all directions. Linear polarization of the EM field is assumed unless noted otherwise.
dBd dB(dipole) – the gain of an antenna compared with the gain of a half-wave dipole antenna. 0 dBd = 2.15 dBi
dBiC dB(isotropic circular) – the gain of an antenna compared to the gain of a theoretical circularly polarized isotropic antenna. There is no fixed conversion rule between dBiC and dBi, as it depends on the receiving antenna and the field polarization.
dBq dB(quarterwave) – the gain of an antenna compared to the gain of a quarter wavelength whip. Rarely used, except in some marketing material; 0 dBq = −0.85 dBi
dBsm dBm², dB(m²) – decibels relative to one square meter: a measure of the effective area for capturing signals of the antenna.
dBm⁻¹ dB(m⁻¹) – decibels relative to reciprocal of meter: measure of the antenna factor.
Other measurements
dBHz or dB‑Hz dB(Hz) – bandwidth relative to one hertz. E.g., 20 dBHz corresponds to a bandwidth of 100 Hz. Commonly used in link budget calculations. Also used in carrier-to-noise-density ratio (not to be confused with carrier-to-noise ratio, in dB).
dBov or dBO dB(overload) – the amplitude of a signal (usually audio) compared with the maximum which a device can handle before clipping occurs. Similar to dBFS, but also applicable to analog systems. According to ITU-T Rec. G.100.1 the level in dBov of a digital system is defined as L = 10 log10(P / P0) dBov, with P0 the maximum signal power, namely that of a rectangular signal with the maximum amplitude. The level of a tone whose digital amplitude (peak value) equals that maximum amplitude is therefore −3.01 dBov.
dBr dB(relative) – simply a relative difference from something else, which is made apparent in context. The difference of a filter's response to nominal levels, for instance.
dBrn dB above reference noise. See also dBrnC
dBrnC dBrn(C) represents an audio level measurement, typically in a telephone circuit, relative to a −90 dBm reference level, with the measurement of this level frequency-weighted by a standard C-message weighting filter. The C-message weighting filter was chiefly used in North America. The psophometric filter is used for this purpose on international circuits.
dBK dB(K) – decibels relative to 1 K; used to express noise temperature.
dB/K dB(K⁻¹) – decibels relative to 1 K⁻¹ (not decibels per kelvin): used for the G/T factor, a figure of merit used in satellite communications, relating the antenna gain to the receiver system noise equivalent temperature.
List of suffixes in alphabetical order
Unpunctuated suffixes
dBA see dB(A).
dBa see dB adjusted.
dBB see dB(B).
dBc relative to carrier – in telecommunications, this indicates the relative levels of noise or sideband power, compared with the carrier power.
dBC see dB(C).
dBD see dB(D).
dBd dB(dipole) – the forward gain of an antenna compared with a half-wave dipole antenna. 0 dBd = 2.15 dBi
dBe dB electrical.
dBf dB(fW) – power relative to 1 femtowatt.
dBFS dB(full scale) – the amplitude of a signal compared with the maximum which a device can handle before clipping occurs. Full-scale may be defined as the power level of a full-scale sinusoid or alternatively a full-scale square wave. A signal measured with reference to a full-scale sine-wave appears 3 dB weaker when referenced to a full-scale square wave, thus: 0 dBFS (full-scale sine wave) = −3 dBFS (full-scale square wave).
dBG G-weighted spectrum
dBi dB(isotropic) – the forward gain of an antenna compared with the hypothetical isotropic antenna, which uniformly distributes energy in all directions. Linear polarization of the EM field is assumed unless noted otherwise.
dBiC dB(isotropic circular) – the forward gain of an antenna compared to a circularly polarized isotropic antenna. There is no fixed conversion rule between dBiC and dBi, as it depends on the receiving antenna and the field polarization.
dBJ energy relative to 1 joule: 1 joule = 1 watt-second = 1 watt per hertz, so power spectral density can be expressed in dBJ.
dBk dB(kW) – power relative to 1 kilowatt.
dBK dB(K) – decibels relative to kelvin: used to express noise temperature.
dBm dB(mW) – power relative to 1 milliwatt.
dBm² or dBsm dB(m²) – decibel relative to one square meter
dBm0 Power in dBm measured at a zero transmission level point.
dBm0s Defined by Recommendation ITU-R V.574.
dBmV dB(mVRMS) – voltage relative to 1 millivolt across 75 Ω.
dBo dB optical. A change of 1 dBo in optical power can result in a change of up to 2 dBe in electrical signal power in a system that is thermal noise limited.
dBO see dBov
dBov or dBO dB(overload) – the amplitude of a signal (usually audio) compared with the maximum which a device can handle before clipping occurs.
dBpp relative to the peak to peak sound pressure.
dBpp relative to the maximum value of the peak electrical power.
dBq dB(quarterwave) – the forward gain of an antenna compared to a quarter wavelength whip. Rarely used, except in some marketing material. 0 dBq = −0.85 dBi
dBr dB(relative) – simply a relative difference from something else, which is made apparent in context. The difference of a filter's response to nominal levels, for instance.
dBrn dB above reference noise. See also dBrnC
dBrnC dBrn(C) represents an audio level measurement, typically in a telephone circuit, relative to the circuit noise level, with the measurement of this level frequency-weighted by a standard C-message weighting filter. The C-message weighting filter was chiefly used in North America.
dBsm see dBm²
dBTP dB(true peak) – peak amplitude of a signal compared with the maximum which a device can handle before clipping occurs.
dBu or dBv RMS voltage relative to √0.6 V ≈ 0.7746 V
dBu0s Defined by Recommendation ITU-R V.574.
dBuV see dBμV
dBuV/m see dBμV/m
dBv see dBu
dBV dB(VRMS) – voltage relative to 1 volt, regardless of impedance.
dBVU dB(VU) – dB volume unit
dBW dB(W) – power relative to 1 watt.
dBW·m⁻²·Hz⁻¹ spectral density relative to 1 W·m⁻²·Hz⁻¹
dBZ dB(Z) – decibel relative to Z = 1 mm⁶⋅m⁻³
dBμ see dBμV/m
dBμV or dBuV dB(μVRMS) – voltage relative to 1 root mean square microvolt.
dBμV/m, dBuV/m, or dBμ dB(μV/m) – electric field strength relative to 1 microvolt per meter.
Suffixes preceded by a space
dB HL dB hearing level is used in audiograms as a measure of hearing loss.
dB Q sometimes used to denote weighted noise level
dB SIL dB sound intensity level – relative to 10⁻¹² W/m²
dB SPL dB SPL (sound pressure level) – for sound in air and other gases, relative to 20 μPa in air or 1 μPa in water
dB SWL dB sound power level – relative to 10⁻¹² W.
Suffixes within parentheses
dB(A), dB(B), dB(C), dB(D), dB(G), and dB(Z) These symbols are often used to denote the use of different weighting filters, used to approximate the human ear's response to sound, although the measurement is still in dB (SPL). These measurements usually refer to noise and its effects on humans and other animals, and they are widely used in industry while discussing noise control issues, regulations and environmental standards. Other variations that may be seen are dBA or dBA.
Other suffixes
dBHz or dB-Hz dB(Hz) – bandwidth relative to one hertz
dB/K or dBK⁻¹ dB(K⁻¹) – decibels relative to reciprocal of kelvin
dBm⁻¹ dB(m⁻¹) – decibel relative to reciprocal of meter: measure of the antenna factor
mB mB(mW) – power relative to 1 milliwatt, in millibels (one hundredth of a decibel). 100 mB = 1 dB. This unit is in the Wi-Fi drivers of the Linux kernel and the regulatory domain sections.
See also
Apparent magnitude
Cent (music)
Day–evening–night noise level (Lden) and day-night average sound level (Ldl), European and American standards for expressing noise level over an entire day
dB drag racing
Decade (log scale)
Loudness
Neper
pH
Phon
Richter magnitude scale
Sone
Notes
References
Further reading
External links
What is a decibel? With sound files and animations
Conversion of sound level units: dBSPL or dBA to sound pressure p and sound intensity J
OSHA Regulations on Occupational Noise Exposure
Working with Decibels (RF signal and field strengths)
Acoustics
Audio electronics
Radio frequency propagation
Telecommunications engineering
Units of level | Decibel | [
"Physics",
"Mathematics",
"Engineering"
] | 8,727 | [
"Audio electronics",
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Units of level",
"Quantity",
"Classical mechanics",
"Acoustics",
"Electromagnetic spectrum",
"Waves",
"Logarithmic scales of mea... |
8,411 | https://en.wikipedia.org/wiki/Darwinism | Darwinism is a term used to describe a theory of biological evolution developed by the English naturalist Charles Darwin (1809–1882) and others. The theory states that all species of organisms arise and develop through the natural selection of small, inherited variations that increase the individual's ability to compete, survive, and reproduce. Also called Darwinian theory, it originally included the broad concepts of transmutation of species or of evolution which gained general scientific acceptance after Darwin published On the Origin of Species in 1859, including concepts which predated Darwin's theories. English biologist Thomas Henry Huxley coined the term Darwinism in April 1860.
Terminology
Darwinism subsequently referred to the specific concepts of natural selection, the Weismann barrier, or the central dogma of molecular biology. Though the term usually refers strictly to biological evolution, creationists have appropriated it to refer to the origin of life or to cosmic evolution, which are distinct from biological evolution, and therefore consider it to be the belief in and acceptance of Darwin's and of his predecessors' work, in place of other concepts, including divine design and extraterrestrial origins.
English biologist Thomas Henry Huxley coined the term Darwinism in April 1860. It was used to describe evolutionary concepts in general, including earlier concepts published by English philosopher Herbert Spencer. Many of the proponents of Darwinism at that time, including Huxley, had reservations about the significance of natural selection, and Darwin himself gave credence to what was later called Lamarckism. The strict neo-Darwinism of German evolutionary biologist August Weismann gained few supporters in the late 19th century. During the approximate period of the 1880s to about 1920, sometimes called "the eclipse of Darwinism", scientists proposed various alternative evolutionary mechanisms which eventually proved untenable. The development of the modern synthesis in the early 20th century, incorporating natural selection with population genetics and Mendelian genetics, revived Darwinism in an updated form.
While the term Darwinism has remained in use amongst the public when referring to modern evolutionary theory, it has increasingly been argued by science writers such as Olivia Judson, Eugenie Scott, and Carl Safina that it is an inappropriate term for modern evolutionary theory. For example, Darwin was unfamiliar with the work of the Moravian scientist and Augustinian friar Gregor Mendel, and as a result had only a vague and inaccurate understanding of heredity. He naturally had no inkling of later theoretical developments and, like Mendel himself, knew nothing of genetic drift, for example.
In the United States and to some extent in the United Kingdom, creationists often use the term "Darwinism" as a pejorative term in reference to beliefs such as scientific materialism.
Huxley
Huxley, upon first reading Darwin's theory in 1858, responded, "How extremely stupid not to have thought of that!"
While the term Darwinism had been used previously to refer to the work of Erasmus Darwin in the late 18th century, the term as understood today was introduced when Charles Darwin's 1859 book On the Origin of Species was reviewed by Thomas Henry Huxley in the April 1860 issue of The Westminster Review. Having hailed the book as "a veritable Whitworth gun in the armoury of liberalism" promoting scientific naturalism over theology, and praising the usefulness of Darwin's ideas while expressing professional reservations about Darwin's gradualism and doubting if it could be proved that natural selection could form new species, Huxley compared Darwin's achievement to that of Nicolaus Copernicus in explaining planetary motion:
These are the basic tenets of evolution by natural selection as defined by Darwin:
More individuals are produced each generation than can survive.
Phenotypic variation exists among individuals and the variation is heritable.
Those individuals with heritable traits better suited to the environment will survive.
When reproductive isolation occurs new species will form.
Other 19th-century usage
"Darwinism" soon came to stand for an entire range of evolutionary (and often revolutionary) philosophies about both biology and society. One of the more prominent approaches, summed in the 1864 phrase "survival of the fittest" by Herbert Spencer, later became emblematic of Darwinism even though Spencer's own understanding of evolution (as expressed in 1857) was more similar to that of Jean-Baptiste Lamarck than to that of Darwin, and predated the publication of Darwin's theory in 1859. What is now called "Social Darwinism" was, in its day, synonymous with "Darwinism"—the application of Darwinian principles of "struggle" to society, usually in support of anti-philanthropic political agenda. Another interpretation, one notably favoured by Darwin's half-cousin Francis Galton, was that "Darwinism" implied that because natural selection was apparently no longer working on "civilized" people, it was possible for "inferior" strains of people (who would normally be filtered out of the gene pool) to overwhelm the "superior" strains, and voluntary corrective measures would be desirable—the foundation of eugenics.
In Darwin's day there was no rigid definition of the term "Darwinism", and it was used by opponents and proponents of Darwin's biological theory alike to mean whatever they wanted it to in a larger context. The ideas had international influence, and Ernst Haeckel developed what was known as Darwinismus in Germany, although, like Spencer's "evolution", Haeckel's "Darwinism" had only a rough resemblance to the theory of Charles Darwin, and was not centred on natural selection. In 1886, Alfred Russel Wallace went on a lecture tour across the United States, starting in New York and going via Boston, Washington, Kansas, Iowa and Nebraska to California, lecturing on what he called "Darwinism" without any problems.
In his book Darwinism (1889), Wallace had used the term "pure-Darwinism", which proposed a "greater efficacy" for natural selection. George Romanes dubbed this view "Wallaceism", noting that in contrast to Darwin, this position was advocating a "pure theory of natural selection to the exclusion of any supplementary theory." Influenced by Darwin, Romanes was a proponent of both natural selection and the inheritance of acquired characteristics. The latter was denied by Wallace, who was a strict selectionist. Romanes' definition of Darwinism conformed directly with Darwin's views and was contrasted with Wallace's definition of the term.
Contemporary usage
The term Darwinism is often used in the United States by promoters of creationism, notably by leading members of the intelligent design movement, as an epithet to attack evolution as though it were an ideology (an "ism") based on philosophical naturalism, atheism, or both. For example, in 1993, UC Berkeley law professor and author Phillip E. Johnson made this accusation of atheism with reference to Charles Hodge's 1874 book What Is Darwinism? However, unlike Johnson, Hodge confined the term to exclude those like American botanist Asa Gray who combined Christian faith with support for Darwin's natural selection theory, before answering the question posed in the book's title by concluding: "It is Atheism."
Creationists use pejoratively the term Darwinism to imply that the theory has been held as true only by Darwin and a core group of his followers, whom they cast as dogmatic and inflexible in their belief. In the 2008 documentary film Expelled: No Intelligence Allowed, which promotes intelligent design (ID), American writer and actor Ben Stein refers to scientists as Darwinists. Reviewing the film for Scientific American, John Rennie says "The term is a curious throwback, because in modern biology almost no one relies solely on Darwin's original ideas ... Yet the choice of terminology isn't random: Ben Stein wants you to stop thinking of evolution as an actual science supported by verifiable facts and logical arguments and to start thinking of it as a dogmatic, atheistic ideology akin to Marxism."
However, Darwinism is also used neutrally within the scientific community to distinguish the modern evolutionary synthesis, which is sometimes called "neo-Darwinism", from the ideas first proposed by Darwin. Darwinism is also used neutrally by historians to differentiate his theory from other evolutionary theories current around the same period. For example, Darwinism may refer to Darwin's proposed mechanism of natural selection, in comparison to more recent mechanisms such as genetic drift and gene flow. It may also refer specifically to the role of Charles Darwin as opposed to others in the history of evolutionary thought—particularly contrasting Darwin's results with those of earlier theories such as Lamarckism or later ones such as the modern evolutionary synthesis.
In political discussions in the United States, the term is mostly used by its enemies. Biologist E. O. Wilson at Harvard University described the term as "a rhetorical device to make evolution seem like a kind of faith, like 'Maoism'", adding, "Scientists don't call it 'Darwinism'." In the United Kingdom, the term often retains its positive sense as a reference to natural selection, and for example British ethologist and evolutionary biologist Richard Dawkins wrote in his collection of essays A Devil's Chaplain, published in 2003, that as a scientist he is a Darwinist.
In his 1995 book Darwinian Fairytales, Australian philosopher David Stove used the term "Darwinism" in a different sense from the above examples. Describing himself as non-religious and as accepting the concept of natural selection as a well-established fact, Stove nonetheless attacked what he described as flawed concepts proposed by some "Ultra-Darwinists". Stove alleged that by using weak or false ad hoc reasoning, these Ultra-Darwinists used evolutionary concepts to offer explanations that were not valid: for example, Stove suggested that the sociobiological explanation of altruism as an evolutionary feature was presented in such a way that the argument was effectively immune to any criticism. English philosopher Simon Blackburn wrote a rejoinder to Stove, though a subsequent essay by Stove's protégé James Franklin suggested that Blackburn's response actually "confirms Stove's central thesis that Darwinism can 'explain' anything."
In more recent times, the Australian moral philosopher and professor Peter Singer, who serves as the Ira W. DeCamp Professor of Bioethics at Princeton University, has proposed the development of a "Darwinian left" based on the contemporary scientific understanding of biological anthropology, human evolution, and applied ethics in order to achieve the establishment of a more equal and cooperative human society in accordance with the sociobiological explanation of altruism.
Esoteric usage
In evolutionary aesthetics theory, there is evidence that perceptions of beauty are determined by natural selection and are therefore Darwinian; that is, things, aspects of people, and landscapes considered beautiful are typically found in situations likely to enhance the survival of the perceiving human's genes.
See also
Darwin Awards
Evidence of common descent
History of evolutionary thought
Modern evolutionary synthesis
Neural Darwinism
Pangenesis—Charles Darwin's hypothetical mechanism for heredity
Social Darwinism
Speciation
Universal Darwinism
References
Sources
Further reading
Fiske, John. (1885). Darwinism, and Other Essays. Houghton Mifflin and Company.
Mayr, Ernst. (1985). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Harvard University Press.
Romanes, George John. (1906). Darwin and After Darwin: An Exposition of the Darwinian Theory and a Discussion of Post-Darwinian Questions. Volume 2: Heredity and Utility. The Open Court Publishing Company.
Wallace, Alfred Russel. (1889). Darwinism: An Exposition of the Theory of Natural Selection, with Some of Its Applications. Macmillan and Company.
Simon, C. (2019). Taking Darwinism seriously. Animal Sentience, 3(23), 47.
External links
1860s neologisms
Biology theories
History of evolutionary biology
Charles Darwin | Darwinism | [
"Biology"
] | 2,423 | [
"Biology theories"
] |
8,429 | https://en.wikipedia.org/wiki/Density | Density (volumetric mass density or specific mass) is a substance's mass per unit of volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:
ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as its weight per unit volume, although this is scientifically inaccurate; that quantity is more specifically called specific weight.
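As a minimal worked illustration of the definition above (the numbers are illustrative, not taken from the text):

    # Minimal sketch: density as mass divided by volume (illustrative values).
    mass_kg = 1.0        # roughly the mass of one litre of water
    volume_m3 = 0.001    # one litre expressed in cubic metres
    density = mass_kg / volume_m3
    print(density)       # 1000.0 kg/m^3, the familiar density of water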
For a pure substance the density has the same numerical value as its mass concentration.
Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium is the densest known element at standard conditions for temperature and pressure.
To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density less than one relative to water means that the substance floats in water.
The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid, which causes it to rise relative to denser unheated material.
The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass.
Other conceptually comparable quantities or ratios include specific density, relative density (specific gravity), and specific weight.
History
Density, floating, and sinking
An understanding that different materials have different densities, and of the relationship between density, floating, and sinking, must date to prehistoric times. Much later it was put into writing. Aristotle, for example, wrote:
Volume vs. density; volume of an irregular shape
In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (Greek: εὕρηκα, "I have found it"). As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment.
The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time.
Nevertheless, in 1586, Galileo Galilei, in one of his first experiments, made a possible reconstruction of how the experiment could have been performed with ancient Greek resources.
Units
From the equation for density (ρ = m/V), mass density has any unit that is mass divided by volume. As there are many units of mass and volume covering many different magnitudes there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. One g/cm3 is equal to 1000 kg/m3. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical and US customary units may be used. See below for a list of some of the most common units of density.
The litre and tonne are not part of the SI, but are acceptable for use with it, leading to the following units:
kilogram per litre (kg/L)
gram per millilitre (g/mL)
tonne per cubic metre (t/m3)
Densities using the following metric units all have exactly the same numerical value, one thousandth of the value in (kg/m3). Liquid water has a density of about 1 kg/dm3, making any of these SI units numerically convenient to use as most solids and liquids have densities between 0.1 and 20 kg/dm3.
kilogram per cubic decimetre (kg/dm3)
gram per cubic centimetre (g/cm3)
1 g/cm3 = 1000 kg/m3
megagram (metric ton) per cubic metre (Mg/m3)
In US customary units density can be stated in:
Avoirdupois ounce per cubic inch (1 g/cm3 ≈ 0.578036672 oz/cu in)
Avoirdupois ounce per fluid ounce (1 g/cm3 ≈ 1.04317556 oz/US fl oz = 1.04317556 lb/US fl pint)
Avoirdupois pound per cubic inch (1 g/cm3 ≈ 0.036127292 lb/cu in)
pound per cubic foot (1 g/cm3 ≈ 62.427961 lb/cu ft)
pound per cubic yard (1 g/cm3 ≈ 1685.5549 lb/cu yd)
pound per US liquid gallon (1 g/cm3 ≈ 8.34540445 lb/US gal)
pound per US bushel (1 g/cm3 ≈ 77.6888513 lb/bu)
slug per cubic foot
Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) in practice are rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one Avoirdupois ounce, and indeed 1 g/cm3 ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion.
Knowing the volume of the unit cell of a crystalline material and its formula weight (in daltons), the density can be calculated. One dalton per cubic ångström is equal to a density of 1.660 539 066 60 g/cm3.
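As a hedged sketch of that calculation (the unit-cell numbers below are illustrative, roughly those of rock salt, and are not taken from the text above):

    # Crystallographic density from formula weight and unit-cell volume.
    # Conversion factor quoted above: 1 Da per cubic angstrom = 1.66053906660 g/cm^3.
    DA_PER_A3_TO_G_PER_CM3 = 1.66053906660

    z = 4                    # formula units per unit cell (assumed)
    formula_weight = 58.44   # Da per formula unit (NaCl, approximate)
    cell_volume = 179.4      # unit-cell volume in cubic angstroms (approximate)

    density = z * formula_weight / cell_volume * DA_PER_A3_TO_G_PER_CM3
    print(round(density, 2))  # about 2.16 g/cm^3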
Measurement
A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures different types of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question.
Homogeneous materials
The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object.
Heterogeneous materials
If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume, the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as m = ∫ ρ(r) dV, with the integral taken over the volume of the body.
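A minimal numerical sketch of that integral, for an assumed one-dimensional density profile (all values are illustrative):

    # Mass of an inhomogeneous body as the integral of density over volume,
    # approximated by summing small volume elements (a rod whose density varies linearly).
    n = 1000
    length = 2.0     # rod length in metres (assumed)
    area = 0.01      # cross-sectional area in square metres (assumed)
    dx = length / n
    mass = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        rho = 7000.0 + 500.0 * x   # assumed density profile along the rod, kg/m^3
        mass += rho * area * dx    # dm = rho(x) dV
    print(round(mass, 1))          # 150.0 kg, matching the exact integral for this profile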
Non-compact materials
In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules.
Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture.
The bulk volume of a material (inclusive of the void space fraction) is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions.
Mass divided by bulk volume determines bulk density. This is not the same thing as the material volumetric mass density.
To determine the material volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a variable void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling.
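The roughly 74% figure quoted above for close-packed equal spheres can be checked directly; it is π/(3√2):

    # Maximum packing fraction of equal spheres (close packing), pi / (3 * sqrt(2)).
    import math
    packing_fraction = math.pi / (3 * math.sqrt(2))
    print(round(packing_fraction, 4))  # 0.7405, i.e. about 74% solid and 26% voids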
In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void.
In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand).
Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two voids materials is reliably known.
Changes of density
In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures.
The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10⁻⁶ bar⁻¹ (1 bar = 0.1 MPa) and a typical thermal expansivity is 10⁻⁵ K⁻¹. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius.
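A quick order-of-magnitude check of those two figures, using the typical values quoted above:

    # Pressure and temperature changes needed for a 1% volume change
    # of a typical liquid or solid, from the typical values quoted above.
    compressibility = 1e-6    # bar^-1
    expansivity = 1e-5        # K^-1
    target_fraction = 0.01    # one percent volume change
    print(target_fraction / compressibility)  # 10000.0 bar, roughly 10^4 atmospheres
    print(target_fraction / expansivity)      # 1000.0 K, on the order of a thousand degrees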
In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is
ρ = MP/(RT), where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature.
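A minimal sketch of this relation for dry air near room conditions (the molar mass and conditions are illustrative assumptions):

    # Ideal-gas density, rho = M * P / (R * T), with illustrative values for dry air.
    M = 0.02897    # molar mass of dry air, kg/mol (approximate)
    P = 101325.0   # pressure, Pa (one standard atmosphere)
    R = 8.314      # universal gas constant, J/(mol*K)
    T = 293.15     # temperature, K (20 degrees Celsius)
    rho = M * P / (R * T)
    print(round(rho, 3))  # about 1.204 kg/m^3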
In the case of volumic thermal expansion at constant pressure and small intervals of temperature the temperature dependence of density is
ρ = ρ₀ / (1 + α(T − T₀)), where ρ₀ is the density at a reference temperature T₀ and α is the thermal expansion coefficient of the material at temperatures close to T₀.
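For instance, under that linearised relation (the numbers below are illustrative, roughly those of liquid water near room temperature):

    # Density change over a small temperature interval, rho = rho_0 / (1 + alpha * dT).
    rho_0 = 998.2   # kg/m^3 at the reference temperature, about 20 degrees C (approximate)
    alpha = 2.1e-4  # volumetric thermal expansion coefficient, 1/K (approximate)
    dT = 10.0       # temperature increase above the reference, K
    rho = rho_0 / (1 + alpha * dT)
    print(round(rho, 1))  # about 996.1 kg/m^3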
Density of solutions
The density of a solution is the sum of mass (massic) concentrations of the components of that solution.
The mass (massic) concentrations of the components sum to the density of the solution: ρ = Σi ρi, where ρi is the mass concentration of component i.
Expressed as a function of the densities of pure components of the mixture and their volume participation, it allows the determination of excess molar volumes:
provided that there is no interaction between the components.
Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients:
List of densities
Various materials
Others
Water
Air
Molar volumes of liquid and solid phase of elements
See also
Densities of the elements (data page)
List of elements by density
Air density
Area density
Bulk density
Buoyancy
Charge density
Density current
Density prediction by the Girolami method
Dord
Energy density
Lighter than air
Linear density
Number density
Orthobaric density
Paper density
Specific weight
Spice (oceanography)
Standard temperature and pressure
Volumic quantity
References
External links
Video: Density Experiment with Oil and Alcohol
Video: Density Experiment with Whiskey and Water
Glass Density Calculation – Calculation of the density of glass at room temperature and of glass melts at 1000 – 1400°C
List of Elements of the Periodic Table – Sorted by Density
Calculation of saturated liquid densities for some components
Field density test
Water – Density and specific weight
Temperature dependence of the density of water – Conversions of density units
A delicious density experiment
Water density calculator Water density for a given salinity and temperature.
Liquid density calculator Select a liquid from the list and calculate density as a function of temperature.
Gas density calculator Calculate the density of a gas as a function of temperature and pressure.
Densities of various materials.
Determination of Density of Solid, instructions for performing classroom experiment. | Density | [
"Physics"
] | 2,961 | [
"Mechanical quantities",
"Physical quantities",
"Mass",
"Intensive quantities",
"Volume-specific quantities",
"Density",
"Mass density",
"Matter"
] |
8,439 | https://en.wikipedia.org/wiki/Diacritic | A diacritic (also diacritical mark, diacritical point, diacritical sign, or accent) is a glyph added to a letter or to a basic glyph. The term derives from the Ancient Greek διακριτικός (diakritikós, "distinguishing"), from διακρίνω (diakrínō, "to distinguish"). The word diacritic is a noun, though it is sometimes used in an attributive sense, whereas diacritical is only an adjective. Some diacritics, such as the acute , grave , and circumflex (all shown above an 'o'), are often called accents. Diacritics may appear above or below a letter or in some other position such as within the letter or between two letters.
The main use of diacritics in Latin script is to change the sound-values of the letters to which they are added. Historically, English has used the diaeresis diacritic to indicate the correct pronunciation of ambiguous words, such as "coöperate", without which the <oo> letter sequence could be misinterpreted to be pronounced . Other examples are the acute and grave accents, which can indicate that a vowel is to be pronounced differently than is normal in that position, for example not reduced to /ə/ or silent as in the case of the two uses of the letter e in the noun résumé (as opposed to the verb resume) and the help sometimes provided in the pronunciation of some words such as doggèd, learnèd, blessèd, and especially words pronounced differently than normal in poetry (for example movèd, breathèd).
Most other words with diacritics in English are borrowings from languages such as French to better preserve the spelling, such as the diaeresis on and , the acute from , the circumflex in the word , and the cedille in . All these diacritics, however, are frequently omitted in writing, and English is the only major modern European language that does not have diacritics in common usage.
In Latin-script alphabets in other languages, diacritics may distinguish between homonyms, such as the French ("there") versus ("the"), which are both pronounced . In Gaelic type, a dot over a consonant indicates lenition of the consonant in question. In other writing systems, diacritics may perform other functions. Vowel pointing systems, namely the Arabic harakat and the Hebrew niqqud systems, indicate vowels that are not conveyed by the basic alphabet. The Indic virama ( ् etc.) and the Arabic sukūn ( ) mark the absence of vowels. Cantillation marks indicate prosody. Other uses include the Early Cyrillic titlo stroke ( ◌҃ ) and the Hebrew gershayim ( ), which, respectively, mark abbreviations or acronyms, and Greek diacritical marks, which showed that letters of the alphabet were being used as numerals. In Vietnamese and the Hanyu Pinyin official romanization system for Mandarin in China, diacritics are used to mark the tones of the syllables in which the marked vowels occur.
In orthography and collation, a letter modified by a diacritic may be treated either as a new, distinct letter or as a letter–diacritic combination. This varies from language to language and may vary from case to case within a language.
In some cases, letters are used as "in-line diacritics", with the same function as ancillary glyphs, in that they modify the sound of the letter preceding them, as in the case of the "h" in the English pronunciation of "sh" and "th". Such letter combinations are sometimes even collated as a single distinct letter. For example, the spelling sch was traditionally often treated as a separate letter in German. Words with that spelling were listed after all other words spelled with s in card catalogs in the Vienna public libraries, for example (before digitization).
Types
Among the types of diacritic used in alphabets based on the Latin script are:
accents (so called because the acute, grave, and circumflex were originally used to indicate different types of pitch accents in the polytonic transcription of Greek)
– acute (); for example
– grave; for example
– circumflex; for example
– caron, wedge; for example
– double acute; for example
– double grave; for example
one dot
– an overdot is used in many orthographies and transcriptions; for example
– an underdot is also used in many orthographies and transcriptions; for example
– an interpunct is used in the Catalan (l·l)
– a dot above right is used in Pe̍h-ōe-jī
tittle, the superscript dot of the modern lowercase Latin and
two dots:
two overdots () are used for umlaut, diaeresis and others; (for example )
two underdots () are used in the International Phonetic Alphabet (IPA) and the ALA-LC romanization system
– triangular colon, used in the IPA to mark long vowels (the "dots" are triangular, not circular).
curves
– breve; for example
– inverted breve; for example
– sicilicus, a palaeographic diacritic similar to a caron or breve
– tilde; for example
– titlo
vertical stroke
– a subscript vertical stroke is used in IPA to mark syllabicity and in to mark a schwa
– a superscript vertical stroke is used in Pe̍h-ōe-jī
macron or horizontal line
– macron; for example
– underbar
overlays
– vertical bar through the character
– slash through the character; for example
– crossbar through the character
ring
– overring: for example
superscript curls
– apostrophe
– inverted apostrophe
– reversed apostrophe
– hook above ()
– horn (); for example
subscript curls
– undercomma; for example
– cedilla; for example
– hook, left or right, sometimes superscript
– ogonek; for example
double marks (over or under two base characters)
– double breve
– tie bar or top ligature
– double circumflex
– longum
– double tilde
double sub/superscript diacritics
– double cedilla
– double ogonek
– double diaeresis
– double ypogegrammeni
The tilde, dot, comma, titlo, apostrophe, bar, and colon are sometimes diacritical marks, but also have other uses.
Not all diacritics occur adjacent to the letter they modify. In the Wali language of Ghana, for example, an apostrophe indicates a change of vowel quality, but occurs at the beginning of the word, as in the dialects ’Bulengee and ’Dolimi. Because of vowel harmony, all vowels in a word are affected, so the scope of the diacritic is the entire word. In abugida scripts, like those used to write Hindi and Thai, diacritics indicate vowels, and may occur above, below, before, after, or around the consonant letter they modify.
The tittle (dot) on the letter or the letter , of the Latin alphabet originated as a diacritic to clearly distinguish from the minims (downstrokes) of adjacent letters. It first appeared in the 11th century in the sequence ii (as in ), then spread to i adjacent to m, n, u, and finally to all lowercase is. The , originally a variant of i, inherited the tittle. The shape of the diacritic developed from initially resembling today's acute accent to a long flourish by the 15th century. With the advent of Roman type it was reduced to the round dot we have today.
Several languages of eastern Europe use diacritics on both consonants and vowels, whereas in western Europe digraphs are more often used to change consonant sounds. Most languages in Europe use diacritics on vowels, aside from English where there are typically none (with some exceptions).
Diacritics specific to non-Latin alphabets
Arabic
(ئ ؤ إ أ and stand alone ء) : indicates a glottal stop.
(ــًــٍــٌـ) () symbols: Serve a grammatical role in Arabic. The sign ـً is most commonly written in combination with alif, e.g. .
(ــّـ) : Gemination (doubling) of consonants.
(ٱ) : Comes most commonly at the beginning of a word. Indicates a type of that is pronounced only when the letter is read at the beginning of the talk.
(آ) : A written replacement for a that is followed by an alif, i.e. (). Read as a glottal stop followed by a long , e.g. are written out respectively as . This writing rule does not apply when the alif that follows a is not a part of the stem of the word, e.g. is not written out as as the stem does not have an alif that follows its .
(ــٰـ) superscript (also "short" or "dagger alif": A replacement for an original alif that is dropped in the writing out of some rare words, e.g. is not written out with the original alif found in the word pronunciation, instead it is written out as .
(In Arabic: also called ):
(ــَـ) (a)
(ــِـ) (i)
(ــُـ) (u)
(ــْـ) (no vowel)
The or vowel points serve two purposes:
They serve as a phonetic guide. They indicate the presence of short vowels (, , or ) or their absence ().
At the last letter of a word, the vowel point reflects the inflection case or conjugation mood.
For nouns, The is for the nominative, for the accusative, and for the genitive.
For verbs, the is for the imperfective, for the perfective, and the is for verbs in the imperative or jussive moods.
Vowel points or should not be confused with consonant points or () – one, two or three dots written above or below a consonant to distinguish between letters of the same or similar form.
Greek
These diacritics are used in addition to the acute, grave, and circumflex accents and the diaeresis:
– iota subscript ()
– rough breathing (, ): aspiration
– smooth (or soft) breathing (, ): lack of aspiration
Hebrew
Niqqud
– Dagesh
– Mappiq
– Rafe
– Shin dot (at top right corner)
– Sin dot (at top left corner)
– Shva
– Kubutz
– Holam
– Kamatz
– Patakh
– Segol
– Tzeire
– Hiriq
(Cantillation marks do not generally render correctly; refer to Hebrew cantillation#Names and shapes of the ta'amim for a complete table together with instructions for how to maximize the possibility of viewing them in a web browser.)
Other
– Geresh
– Gershayim
Korean
The diacritics 〮 and 〯 , known as Bangjeom (), were used to mark pitch accents in Hangul for Middle Korean. They were written to the left of a syllable in vertical writing and above a syllable in horizontal writing.
Sanskrit and Indic
Syriac
A dot above and a dot below a letter represent , transliterated as a or ă,
Two diagonally-placed dots above a letter represent , transliterated as ā or â or å,
Two horizontally-placed dots below a letter represent , transliterated as e or ĕ; often pronounced and transliterated as i in the East Syriac dialect,
Two diagonally-placed dots below a letter represent , transliterated as ē,
A dot underneath the Beth represent a soft sound, transliterated as v
A tilde (~) placed under Gamel represent a sound, transliterated as j
The letter Waw with a dot below it represents , transliterated as ū or u,
The letter Waw with a dot above it represents , transliterated as ō or o,
The letter Yōḏ with a dot beneath it represents , transliterated as ī or i,
A tilde (~) under Kaph represent a sound, transliterated as ch or č,
A semicircle under Peh represents an sound, transliterated as f or ph.
In addition to the above vowel marks, transliteration of Syriac sometimes includes ə, e̊ or superscript e (or often nothing at all) to represent an original Aramaic schwa that became lost later on at some point in the development of Syriac. Some transliteration schemes find its inclusion necessary for showing spirantization or for historical reasons.
Non-alphabetic scripts
Some non-alphabetic scripts also employ symbols that function essentially as diacritics.
Non-pure abjads (such as Hebrew and Arabic script) and abugidas use diacritics for denoting vowels. Hebrew and Arabic also indicate consonant doubling and change with diacritics; Hebrew and Devanagari use them for foreign sounds. Devanagari and related abugidas also use a diacritical mark called a virama to mark the absence of a vowel. In addition, Devanagari uses the moon-dot chandrabindu ( ँ ) for vowel nasalization.
Unified Canadian Aboriginal Syllabics use several types of diacritics, including the diacritics with alphabetic properties known as Medials and Finals. Although long vowels originally were indicated with a negative line through the Syllabic glyphs, making the glyph appear broken, in the modern forms, a dot above is used to indicate vowel length. In some of the styles, a ring above indicates a long vowel with a [j] off-glide. Another diacritic, the "inner ring" is placed at the glyph's head to modify [p] to [f] and [t] to [θ]. Medials such as the "w-dot" placed next to the Syllabics glyph indicates a [w] being placed between the syllable onset consonant and the nucleus vowel. Finals indicate the syllable coda consonant; some of the syllable coda consonants in word medial positions, such as with the "h-tick", indicate the fortification of the consonant in the syllable following it.
The Japanese hiragana and katakana syllabaries use the dakuten (◌゛) and handakuten (◌゜) (in Japanese: 濁点 and 半濁点) symbols, also known as nigori (濁 "muddying") or ten-ten (点々 "dot dot") and maru (丸 "circle"), to indicate voiced consonants or other phonetic changes.
Emoticons are commonly created with diacritic symbols, especially Japanese emoticons on popular imageboards.
Alphabetization or collation
Different languages use different rules to put diacritic characters in alphabetical order. For example, French and Portuguese treat letters with diacritical marks the same as the underlying letter for purposes of ordering and dictionaries. The Scandinavian languages and the Finnish language, by contrast, treat the characters with diacritics å, ä, and ö as distinct letters of the alphabet, and sort them after z. Usually ä (a-umlaut) and ö (o-umlaut) [used in Swedish and Finnish] are sorted as equivalent to æ (ash) and ø (o-slash) [used in Danish and Norwegian]. Also, aa, when used as an alternative spelling to å, is sorted as such. Other letters modified by diacritics are treated as variants of the underlying letter, with the exception that ü is frequently sorted as y.
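As a hedged illustration of such an ordering (a hand-rolled sort key for the Swedish convention, not a full locale or ICU collation):

    # Simplified Swedish-style alphabetical order: å, ä and ö are separate letters after z.
    swedish_order = "abcdefghijklmnopqrstuvwxyzåäö"
    rank = {ch: i for i, ch in enumerate(swedish_order)}

    def swedish_key(word):
        # Characters outside the listed alphabet sort last; case is ignored in this sketch.
        return [rank.get(ch, len(swedish_order)) for ch in word.lower()]

    words = ["äpple", "apa", "örn", "zebra", "ål"]
    print(sorted(words, key=swedish_key))  # ['apa', 'zebra', 'ål', 'äpple', 'örn']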
Languages that treat accented letters as variants of the underlying letter usually alphabetize words with such symbols immediately after similar unmarked words. For instance, in German where two words differ only by an umlaut, the word without it is sorted first in German dictionaries (e.g. schon and then schön, or fallen and then fällen). However, when names are concerned (e.g. in phone books or in author catalogues in libraries), umlauts are often treated as combinations of the vowel with a suffixed ; Austrian phone books now treat characters with umlauts as separate letters (immediately following the underlying vowel).
In Spanish, the grapheme ñ is considered a distinct letter, different from n and collated between n and o, as it denotes a different sound from that of a plain n. But the accented vowels á, é, í, ó, ú are not separated from the unaccented vowels a, e, i, o, u, as the acute accent in Spanish only modifies stress within the word or denotes a distinction between homonyms, and does not modify the sound of a letter.
For a comprehensive list of the collating orders in various languages, see Collating sequence.
Generation with computers
Modern computer technology was developed mostly in countries that speak Western European languages (particularly English), and many early binary encodings were developed with a bias favoring English, a language written without diacritical marks. With computer memory and computer storage at a premium, early character sets were limited to the Latin alphabet, the ten digits and a few punctuation marks and conventional symbols. The American Standard Code for Information Interchange (ASCII), first published in 1963, encoded just 95 printable characters. It included just four free-standing diacritics (acute, grave, circumflex and tilde), which were to be used by backspacing and overprinting the base letter. The ISO/IEC 646 standard (1967) defined national variations that replace some American graphemes with precomposed characters (such as , and ), according to language, but remained limited to 95 printable characters.
Unicode was conceived to solve this problem by assigning every known character its own code; if this code is known, most modern computer systems provide a method to input it. For historical reasons, almost all the letter-with-accent combinations used in European languages were given unique code points and these are called precomposed characters. For other languages, it is usually necessary to use a combining character diacritic together with the desired base letter. Unfortunately, even as of 2024, many applications and web browsers remain unable to handle combining diacritics properly.
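A small sketch of the distinction using Python's standard unicodedata module: "é" exists both as a single precomposed code point and as "e" followed by a combining acute accent, and Unicode normalization converts between the two forms.

    # Precomposed vs. combining representations of an accented letter.
    import unicodedata

    precomposed = "\u00e9"   # é as one code point (LATIN SMALL LETTER E WITH ACUTE)
    combining = "e\u0301"    # e followed by COMBINING ACUTE ACCENT

    print(precomposed == combining)                                # False: different code point sequences
    print(len(precomposed), len(combining))                        # 1 2
    print(unicodedata.normalize("NFC", combining) == precomposed)  # True: composed (precomposed) form
    print(unicodedata.normalize("NFD", precomposed) == combining)  # True: decomposed (combining) form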
Depending on the keyboard layout and keyboard mapping, it is more or less easy to enter letters with diacritics on computers and typewriters. Keyboards used in countries where letters with diacritics are the norm, have keys engraved with the relevant symbols. In other cases, such as when the US international or UK extended mappings are used, the accented letter is created by first pressing the key with the diacritic mark, followed by the letter to place it on. This method is known as the dead key technique, as it produces no output of its own but modifies the output of the key pressed after it.
Languages with letters containing diacritics
The following languages have letters with diacritics that are orthographically distinct from those without diacritics.
Latin script
Baltic
Latvian has the following letters: , , , , , , , , , ,
Lithuanian. In general usage, where letters appear with the caron (, and ), they are considered as separate letters from , or and collated separately; letters with the ogonek (, , and ), the macron () and the overdot () are considered as separate letters as well, but not given a unique collation order.
Celtic
Welsh uses the circumflex, diaeresis, acute, and grave accents on its seven vowels , , , , , , (hence the composites , , , , , , , , , , , , , , , , , , , , , , , , , , , ). However all except the circumflex (which is used as a macron) are fairly rare.
Following spelling reforms since the 1970s, Scottish Gaelic uses graves only, which can be used on any vowel (, , , , ). Formerly acute accents could be used on , and , which were used to indicate a specific vowel quality. With the elimination of these accents, the new orthography relies on the reader having prior knowledge of pronunciation of a given word.
Manx uses the cedilla diacritic combined with h to give the digraph (pronounced ) to mark the distinction between it and the digraph (pronounced or ). Other diacritics used in Manx included the circumflex and diaeresis, as in , , , etc. to mark the distinction between two similarly spelled words but with slightly differing pronunciation.
Irish uses only acute accents to mark long vowels, following the 1948 spelling reform. Lenition is indicated using an overdot in Gaelic type (,,, , , , , ); in Roman type, a suffixed is used. Thus, is equivalent to .
Breton does not have a single orthography (spelling system), but uses diacritics for a number of purposes. The diaeresis is used to mark that two vowels are pronounced separately and not as a diphthong/digraph. The circumflex is used to mark long vowels, but usually only when the vowel length is not predictable by phonology. Nasalization of vowels may be marked with a tilde, or following the vowel with the letter . The plural suffix -où is used as a unified spelling to represent a suffix with a number of pronunciations in different dialects, and to distinguish this suffix from the digraph which is pronounced as . An apostrophe is used to distinguish , pronounced as the digraph is used in other Celtic languages, from the French-influenced digraph ch, pronounced .
Finno-Ugric
Estonian has a distinct letter õ, which contains a tilde. Estonian vowels with double-dot diacritics ä, ö, ü are similar to German, but these are also distinct letters, unlike German umlauted letters. All four have their own place in the alphabet, between w and x. Carons in š or ž appear only in foreign proper names and loanwords. Also these are distinct letters, placed in the alphabet between s and t.
Finnish uses double-dotted vowels ( and ). As in Swedish and Estonian, these are regarded as individual letters, rather than 'vowel + diacritic' combinations (as happens in German). It also uses the characters , and in foreign names and loanwords. In the Finnish and Swedish alphabets, , and collate as separate letters after , the others as variants of their base letter.
Hungarian uses the double-dot, the acute and double acute diacritics (the last is unique to Hungarian): (, ), (, , , , ) and (, ). The acute accent indicates the long form of a vowel (in case of /, /, /) while the double acute performs the same function for and . The acute accent can also indicate a different sound (more open, as in case of /, /). Both long and short forms of the vowels are listed separately in the Hungarian alphabet, but members of the pairs /, /, /, /, /, / and / are collated in dictionaries as the same letter.
Livonian has the following letters: , , , , , , , , , , , , , , , , , .
Germanic
German uses the two-dots diacritic (): letters ä, ö, and ü, used to indicate the fronting of back vowels (see umlaut (linguistics)).
Dutch uses acute, circumflex, grave and two-dots diacritics with most vowels and cedilla with c, as in French. This results in , , , , , , , , , , , , , , , and . This is mostly on words (and names) originating from French (like crème, café, gêne, façade). The acute accent is also used to stress the vowel (like één). The two-dots diacritic is used as a linguistic diaeresis (a vowel hiatus) that splits the two vowels (e.g., reële, reünie, coördinatie), rather than to indicate a linguistic umlaut as used in German.
Afrikaans uses 16 additional vowel forms, both uppercase and lowercase: , , , , , , , , , , , , , , , .
Faroese uses acutes and some additional letters. All are considered separate letters and have their own place in the alphabet: , , , , and .
Icelandic uses acutes and other additional letters. All are considered separate letters, and have their own place in the alphabet: á, , , , , and .
Danish and Norwegian use additional characters like the o-slash ø and the a-overring å. These letters come after z and æ, in the order ø, å. Historically, the å has developed from a ligature by writing a small superscript o over a lowercase a; if an å character is unavailable, some Scandinavian languages allow the substitution of a doubled a, thus aa. The Scandinavian languages collate these letters after z, but have different national collation standards.
Swedish uses a-diaeresis () and o-diaeresis () in the place of () and slashed o () in addition to the a-overring (). Historically, the two-dots diacritic for the Swedish letters and developed from a small Gothic written above the letters. These letters are collated after , in the order , , .
Romance
In Asturian, Galician and Spanish, the character is a letter and collated between n and o.
Asturian uses an underdot: (lower case, ), and (lower case )
Catalan uses the acute accent , , , , the grave accent , , , the diaeresis , , the cedilla , and the interpunct .
In Valencian, the circumflex , , , , may also be used.
Corsican uses the following in its alphabet: /, /, /, /, /.
French uses four diacritics, appearing on vowels (circumflex, acute, grave, diaeresis) and the cedilla appearing in .
Italian uses two diacritics, appearing on vowels (acute, grave)
Leonese: could use or .
Portuguese uses a tilde with the vowels and and a cedilla with c.
Romanian uses a breve on the letter a () to indicate the sound schwa , as well as a circumflex over the letters a () and i () for the sound . Romanian also writes a comma below the letters s () and t () to represent the sounds and , respectively. These characters are collated after their non-diacritic equivalent.
Spanish uses acute accents (á, é, í, ó, ú) to indicate stress falling on a different syllable than the one it would fall on based on default rules, and to distinguish certain one-syllable homonyms (e.g. el, the masculine singular definite article, and él, "he"). The acute accent is also used to break up sequences of vowels that would normally be pronounced as a diphthong into two syllables, as in the word . Diaeresis is used on u only, to distinguish the combinations from , e.g. . The tilde on ñ is not considered a diacritic as ñ is considered a distinct letter from n, not a mutated form of it.
Slavic
Gaj's Latin alphabet, used in Croatian and latinized Serbian, has the symbols , , , and , which are considered separate letters and are listed as such in dictionaries and other contexts in which words are listed according to alphabetical order. It also has one digraph including a diacritic, dž, which is also alphabetized independently, and follows and precedes in the alphabetical order.
The Czech alphabet uses the acute (á é í ó ú ý), caron (č ď ě ň ř š ť ž), and for one letter (ů) the ring. (In ď and ť the caron is modified to look rather like an apostrophe.) Letter with caron are considered separate letters, whereas vowels are considered only as longer variants of the unaccented letters. Acute does not affect alphabetical order, letters with caron are ordered after original counterparts.
Polish has the following letters: ą ć ę ł ń ó ś ź ż. These are considered to be separate letters: each of them is placed in the alphabet immediately after its Latin counterpart (e.g. between and ), and are placed after in that order.
The Serbian Cyrillic alphabet has no diacritics, instead it has a grapheme (glyph) for every letter of its Latin counterpart (including Latin letters with diacritics and the digraphs dž, lj and nj).
The Slovak alphabet uses the acute (á é í ó ú ý ĺ ŕ), caron (č ď ľ ň š ť ž dž), umlaut (ä) and circumflex accent (ô). All of those are considered separate letters and are placed directly after the original counterpart in the alphabet.
The basic Slovenian alphabet has the symbols , , and , which are considered separate letters and are listed as such in dictionaries and other contexts in which words are listed according to alphabetical order. Letters with a caron are placed right after the letters as written without the diacritic. The letter ('d with bar') may be used in non-transliterated foreign words, particularly names, and is placed after and before .
Turkic
Azerbaijani includes the distinct Turkish alphabet letters Ç, Ğ, I, İ, Ö, Ş and Ü.
Crimean Tatar includes the distinct Turkish alphabet letters Ç, Ğ, I, İ, Ö, Ş and Ü. Unlike Turkish, Crimean Tatar also has the letter Ñ.
Gagauz includes the distinct Turkish alphabet letters Ç, Ğ, I, İ, Ö and Ü. Unlike Turkish, Gagauz also has the letters Ä, Ê Ș and Ț. Ș and Ț are derived from the Romanian alphabet for the same sounds. Sometime the Turkish Ş may be used instead of Ș.
Turkish uses a with a breve (), two letters with two dots ( and , representing two rounded front vowels), two letters with a cedilla ( and , representing the affricate and the fricative ), and also possesses a dotted capital (and a dotless lowercase representing a high unrounded back vowel). In Turkish each of these are separate letters, rather than versions of other letters, where dotted capital and lower case are the same letter, as are dotless capital and lowercase . Typographically, and are sometimes rendered with an underdot, as in . The new Azerbaijani, Crimean Tatar, and Gagauz alphabets are based on the Turkish alphabet and its same diacriticized letters, with some additions.
Turkmen includes the distinct Turkish alphabet letters Ç, Ö, Ş and Ü. In addition, Turkmen uses A with diaeresis (Ä) to represent , N with caron () to represent the velar nasal , Y with acute () to represent the palatal approximant , and Z with caron () to represent .
Other
Albanian has two special letters Ç and Ë upper and lowercase. They are placed next to the most similar letters in the alphabet, c and e correspondingly.
Esperanto has the symbols ŭ, ĉ, ĝ, ĥ, ĵ and ŝ, which are included in the alphabet, and considered separate letters.
Filipino also has the character ñ as a letter and is collated between n and o.
Modern Greenlandic does not use any diacritics, although ø and å are used to spell loanwords, especially from Danish and English. From 1851 until 1973, Greenlandic was written in an alphabet invented by Samuel Kleinschmidt, where long vowels and geminate consonants were indicated by diacritics on vowels (in the case of consonant gemination, the diacritics were placed on the vowel preceding the affected consonant). For example, the name Kalaallit Nunaat was spelled Kalâdlit Nunât. This scheme uses the circumflex (◌̂) to indicate a long vowel (e.g. ; modern: ), an acute accent (◌́) to indicate gemination of the following consonant: (i.e. ; modern: ) and, finally, a tilde (◌̃) or a grave accent (◌̀), depending on the author, indicates vowel length and gemination of the following consonant (e.g. ; modern: ). , used only before , are now written in Greenlandic.
Hawaiian uses the kahakō (macron) over vowels, although there is some disagreement over considering them as individual letters. The kahakō over a vowel can completely change the meaning of a word that is spelled the same but without the kahakō.
Kurdish uses the symbols Ç, Ê, Î, Ş and Û with other 26 standard Latin alphabet symbols.
Lakota alphabet uses the caron for the letters č, ȟ, ǧ, š, and ž. It also uses the acute accent for stressed vowels á, é, í, ó, ú, áŋ, íŋ, úŋ.
Malay uses some diacritics such as á, ā, ç, í, ñ, ó, š, ú. Uses of diacritics was continued until late 19th century except ā and ē.
Maltese uses a C, G, and Z with a dot over them (Ċ, Ġ, Ż), and also has an H with an extra horizontal bar. For uppercase H, the extra bar is written slightly above the usual bar. For lowercase H, the extra bar is written crossing the vertical, like a t, and not touching the lower part (Ħ, ħ). The above characters are considered separate letters. The letter 'c' without a dot has fallen out of use due to redundancy. 'Ċ' is pronounced like the English 'ch' and 'k' is used as a hard c as in 'cat'. 'Ż' is pronounced just like the English 'Z' as in 'Zebra', while 'Z' is used to make the sound of 'ts' in English (like 'tsunami' or 'maths'). 'Ġ' is used as a soft 'G' like in 'geometry', while the 'G' sounds like a hard 'G' like in 'log'. The digraph 'għ' (called għajn after the Arabic letter name ʻayn for غ) is considered separate, and sometimes ordered after 'g', whilst in other volumes it is placed between 'n' and 'o' (the Latin letter 'o' originally evolved from the shape of Phoenician ʻayin, which was traditionally collated after Phoenician nūn).
The romanization of Syriac uses the altered letters of. Ā, Č, Ḏ, Ē, Ë, Ġ, Ḥ, Ō, Š, Ṣ, Ṭ, Ū, Ž alongside the 26 standard Latin alphabet symbols.
Vietnamese uses the horn diacritic for the letters ơ and ư; the circumflex for the letters â, ê, and ô; the breve for the letter ă; and a bar through the letter đ. Separately, it also has á, à, ả, ã and ạ, the five tones used for vowels besides the flat tone 'a'.
Cyrillic letters
Belarusian and Uzbek Cyrillic have a letter .
Belarusian, Bulgarian, Russian and Ukrainian have the letter .
Belarusian and Russian have the letter . In Russian, this letter is usually replaced by , although it has a different pronunciation. The use of instead of does not affect the pronunciation. Ё is always used in children's books and in dictionaries. A minimal pair is все (vs'e, "everybody" pl.) and всё (vs'o, "everything" n. sg.). In Belarusian the replacement by is a mistake; in Russian, it is permissible to use either or for but the former is more common in everyday writing (as opposed to instructional or juvenile writing).
The Cyrillic Ukrainian alphabet has the letters , and . Ukrainian Latynka has many more.
Macedonian has the letters and .
In Bulgarian and Macedonian the possessive pronoun ѝ (ì, "her") is spelled with a grave accent in order to distinguish it from the conjunction и (i, "and").
The acute accent above any vowel in Cyrillic alphabets is used in dictionaries, books for children and foreign learners to indicate the word stress, it also can be used for disambiguation of similarly spelled words with different lexical stresses.
Diacritics that do not produce new letters
English
English is one of the few European languages that does not have many words that contain diacritical marks. Instead, digraphs are the main way the Modern English alphabet adapts the Latin to its phonemes. Exceptions are unassimilated foreign loanwords, including borrowings from French (and, increasingly, Spanish, like jalapeño and piñata); however, the diacritic is also sometimes omitted from such words. Loanwords that frequently appear with the diacritic in English include café, résumé or resumé (a usage that helps distinguish it from the verb resume), soufflé, and naïveté (see English terms with diacritical marks). In older practice (and even among some orthographically conservative modern writers), one may see examples such as élite, mêlée and rôle.
English speakers and writers once used the diaeresis more often than now in words such as coöperation (from Fr. coopération), zoölogy (from Grk. zoologia), and seeër (now more commonly see-er or simply seer) as a way of indicating that adjacent vowels belonged to separate syllables, but this practice has become far less common. The New Yorker magazine is a major publication that continues to use the diaeresis in place of a hyphen for clarity and economy of space.
A few English words, often when used out of context, especially in isolation, can only be distinguished from other words of the same spelling by using a diacritic or modified letter. These include exposé, lamé, maté, öre, øre, résumé and rosé. In a few words, diacritics that did not exist in the original have been added for disambiguation, as in maté (from Sp. and Port. mate), saké (the standard Romanization of the Japanese has no accent mark), and Malé (from Dhivehi މާލެ), to clearly distinguish them from the English words mate, sake, and male.
The acute and grave accents are occasionally used in poetry and lyrics: the acute to indicate stress overtly where it might be ambiguous (rébel vs. rebél) or nonstandard for metrical reasons (caléndar), the grave to indicate that an ordinarily silent or elided syllable is pronounced (warnèd, parlìament).
In certain personal names such as Renée and Zoë, often two spellings exist, and the person's own preference will be known only to those close to them. Even when the name of a person is spelled with a diacritic, like Charlotte Brontë, this may be dropped in English-language articles, and even in official documents such as passports, due either to carelessness, the typist not knowing how to enter letters with diacritical marks, or technical reasons (California, for example, does not allow names with diacritics, as the computer system cannot process such characters). They also appear in some worldwide company names and/or trademarks, such as Nestlé and Citroën.
Other languages
The following languages have letter-diacritic combinations that are not considered independent letters.
Afrikaans uses a diaeresis to mark vowels that are pronounced separately and not as one would expect where they occur together, for example voel (to feel) as opposed to voël (bird). The circumflex is used in ê, î, ô and û generally to indicate long close-mid, as opposed to open-mid vowels, for example in the words wêreld (world) and môre (morning, tomorrow). The acute accent is used to add emphasis in the same way as underlining or writing in bold or italics in English, for example Dit is jóú boek (It is your book). The grave accent is used to distinguish between words that are different only in placement of the stress, for example appel (apple) and appèl (appeal) and in a few cases where it makes no difference to the pronunciation but distinguishes between homophones. The two most usual cases of the latter are in the sayings òf... òf (either... or) and nòg... nòg (neither... nor) to distinguish them from of (or) and nog (again, still).
Aymara uses a diacritical horn over p, q, t, k, ch.
Catalan has the following composite characters: à, ç, é, è, í, ï, ó, ò, ú, ü, l·l. The acute and the grave indicate stress and vowel height, the cedilla marks the result of a historical palatalization, the diaeresis indicates either a hiatus, or that the letter u is pronounced when the graphemes gü, qü are followed by e or i, and the interpunct (·) distinguishes the different values of ll and l·l.
Some orthographies of Cornish such as Kernowek Standard and Unified Cornish use diacritics, while others such as Kernewek Kemmyn and the Standard Written Form do not (or only use them optionally in teaching materials).
Dutch uses the diaeresis. For example, in ruïne it means that the u and the i are separately pronounced in their usual way, and not in the way that the combination ui is normally pronounced. Thus it works as a separation sign and not as an indication for an alternative version of the i. Diacritics can be used for emphasis (érg koud for very cold) or for disambiguation between a number of words that are spelled the same when context does not indicate the correct meaning (één appel = one apple, een appel = an apple; vóórkomen = to occur, voorkómen = to prevent). Grave and acute accents are used on a very small number of words, mostly loanwords. The ç also appears in some loanwords.
Faroese. Non-Faroese accented letters are not added to the Faroese alphabet. These include é, ö, ü, å and recently also letters like š, ł, and ć.
Filipino has the following composite characters: á, à, â, é, è, ê, í, ì, î, ó, ò, ô, ú, ù, û. Everyday use of diacritics for Filipino is, however, uncommon, and meant only to distinguish between homonyms: a word with the usual penultimate stress versus one with a different stress placement. This aids both comprehension and pronunciation if both are relatively adjacent in a text, or if a word is itself ambiguous in meaning. The letter ñ ("eñe") is not an n with a diacritic, but rather collated as a separate letter, one of eight borrowed from Spanish. Diacritics appear in Spanish loanwords and names observing Spanish orthography rules.
Finnish. Carons in š and ž appear only in foreign proper names and loanwords, but may be substituted with sh or zh if and only if it is technically impossible to produce accented letters in the medium. Unlike in Estonian, š and ž are not considered distinct letters in Finnish.
French uses five diacritics. The grave (accent grave) marks the sound /ɛ/ when over an e, as in père ("father"), or is used to distinguish words that are otherwise homographs such as a/à ("has"/"to") or ou/où ("or"/"where"). The acute (accent aigu) is only used in "é", modifying the "e" to make the sound /e/, as in étoile ("star"). The circumflex (accent circonflexe) generally denotes that an S once followed the vowel in Old French or Latin, as in fête ("party"), the Old French being feste and the Latin being festum. Whether the circumflex modifies the vowel's pronunciation depends on the dialect and the vowel. The cedilla (cédille) indicates that a normally hard "c" (before the vowels "a", "o", and "u") is to be pronounced /s/, as in ça ("that"). The diaeresis (tréma) indicates that two adjacent vowels that would normally be pronounced as one are to be pronounced separately, as in Noël ("Christmas").
Galician vowels can bear an acute (á, é, í, ó, ú) to indicate stress or difference between two otherwise same written words (é, 'is' vs. e, 'and'), but the diaeresis is only used with ï and ü to show two separate vowel sounds in pronunciation. Only in foreign words may Galician use other diacritics such as ç (common during the Middle Ages), ê, or à.
German uses the three umlauted characters ä, ö and ü. These diacritics indicate vowel changes. For instance, the word Ofen "oven" has the plural Öfen. The mark originated as a superscript e; a handwritten blackletter e resembles two parallel vertical lines, like a diaeresis. Due to this history, "ä", "ö" and "ü" can be written as "ae", "oe" and "ue" respectively, if the umlaut letters are not available.
Hebrew has various diacritic marks known as niqqud that are used above and below the script to represent vowels. These must be distinguished from cantillation marks, which are keys to pronunciation and syntax.
The International Phonetic Alphabet uses diacritic symbols and characters to indicate phonetic features or secondary articulations.
Irish uses the acute to indicate that a vowel is long: á, é, í, ó, ú. It is known as síneadh fada "long sign" or simply fada "long" in Irish. In the older Gaelic type, overdots are used to indicate lenition of a consonant: ḃ, ċ, ḋ, ḟ, ġ, ṁ, ṗ, ṡ, ṫ.
Italian mainly has the acute and the grave (à, è/é, ì, ò/ó, ù), typically to indicate a stressed syllable that would not be stressed under the normal rules of pronunciation, but sometimes also to distinguish between words that are otherwise spelled the same way (e.g. "e", and; "è", is). Despite its rare use, Italian orthography also allows the circumflex (î), in two cases: it can be found in older literary contexts (roughly up to the 19th century) to signal a syncope (fêro→fecero, they did), or in modern Italian to signal the contraction of "-ii" when the plural ending -i is added to a root that itself ends in -i (e.g. singular demonio, plural demonii→demonî); in this case the circumflex also signals that the intended word is not dèmoni, the plural of "demone" formed by shifting the accent (demònî, "devils"; dèmoni, "demons").
Lithuanian uses the acute, grave and tilde in dictionaries to indicate stress types in the language's pitch accent system.
Maltese also uses the grave on its vowels to indicate stress at the end of a word with two or more syllables: lowercase à, è, ì, ò, ù; capitals À, È, Ì, Ò, Ù.
Māori makes use of macrons to mark long vowels.
Occitan has the following composite characters: á, à, ç, é, è, í, ï, ó, ò, ú, ü, n·h, s·h. The acute and the grave indicate stress and vowel height, the cedilla marks the result of a historical palatalization, the diaeresis indicates either a hiatus, or that the letter u is pronounced when the graphemes gü, qü are followed by e or i, and the interpunct (·) distinguishes the different values of nh/n·h and sh/s·h (i.e., that the letters are supposed to be pronounced separately, not combined into "ny" and "sh").
Portuguese has the following composite characters: à, á, â, ã, ç, é, ê, í, ó, ô, õ, ú. The acute and the circumflex indicate stress and vowel height, the grave indicates crasis, the tilde represents nasalization, and the cedilla marks the result of a historical lenition.
Acutes are also used in Slavic language dictionaries and textbooks to indicate lexical stress, placed over the vowel of the stressed syllable. This can also serve to disambiguate meaning: for example, in Russian писа́ть (pisáť) means "to write", but пи́сать (písať) means "to piss", and бо́льшая часть (the biggest part) is distinguished from больша́я часть (the big part).
Spanish uses the acute and the diaeresis. The acute is used on a vowel in a stressed syllable in words with irregular stress patterns. It can also be used to "break up" a diphthong as in tío (pronounced as two syllables, rather than as a single-syllable diphthong as it would be without the accent). Moreover, the acute can be used to distinguish words that otherwise are spelled alike, such as si ("if") and sí ("yes"), and also to distinguish interrogative and exclamatory pronouns from homophones with a different grammatical function, such as donde/¿dónde? ("where"/"where?") or como/¿cómo? ("as"/"how?"). The acute may also be used to avoid typographical ambiguity, as in 1 ó 2 ("1 or 2"; without the acute this might be interpreted as "1 0 2"). The diaeresis is used only over u (ü) for it to be pronounced in the combinations gue and gui, where u is normally silent, for example ambigüedad. In poetry, the diaeresis may be used on i and u as a way to force a hiatus. As noted above, in the nasal ñ the tilde (squiggle) is not considered a diacritic sign at all, but a composite part of a distinct glyph, with its own chapter in the dictionary: a glyph that denotes the 15th letter of the Spanish alphabet.
Swedish uses the acute to show non-standard stress, for example in its words for café and résumé. This occasionally helps resolve ambiguities, such as ide (hibernation) versus idé (idea). In these words, the acute is not optional. Some proper names use non-standard diacritics, such as Carolina Klüft and Staël von Holstein. For foreign loanwords the original accents are strongly recommended, unless the word has been infused into the language, in which case they are optional. Hence crème fraîche but ampere. Swedish also has the letters å, ä, and ö, but these are considered distinct letters, not a and o with diacritics.
Tamil does not have any diacritics in itself, but uses the Arabic numerals 2, 3 and 4 as diacritics to represent aspirated, voiced, and voiced-aspirated consonants when Tamil script is used to write long passages in Sanskrit.
Thai has its own system of diacritics derived from Indian numerals, which denote different tones.
Vietnamese uses the acute (dấu sắc), the grave (dấu huyền), the tilde (dấu ngã), the underdot (dấu nặng) and the hook above (dấu hỏi) on vowels as tone indicators.
Welsh uses the circumflex, diaeresis, acute, and grave on its seven vowels a, e, i, o, u, w, y. The most common is the circumflex (which it calls to bach, meaning "little roof", or acen grom "crooked accent", or hirnod "long sign") to denote a long vowel, usually to disambiguate it from a similar word with a short vowel or a semivowel. The rarer grave accent has the opposite effect, shortening vowel sounds that would usually be pronounced long. The acute accent and diaeresis are also occasionally used, to denote stress and vowel separation respectively. The w-circumflex and the y-circumflex are among the most commonly accented characters in Welsh, but unusual in languages generally, and were until recently very hard to obtain in word-processed and HTML documents.
Transliteration
Several languages that are not written with the Roman alphabet are transliterated, or romanized, using diacritics. Examples:
Arabic has several romanisations, depending on the type of application, region, intended audience, country, etc. Many of them make extensive use of diacritics, e.g., some methods use an underdot for rendering emphatic consonants (ṣ, ṭ, ḍ, ẓ, ḥ). The macron is often used to render long vowels. š is often used for the sound of English sh, and ġ for the voiced velar fricative ghayn.
Chinese has several romanizations that use the umlaut, but only on u (ü). In Hanyu Pinyin, the four tones of Mandarin Chinese are denoted by the macron (first tone), acute (second tone), caron (third tone) and grave (fourth tone) diacritics. Example: ā, á, ǎ, à.
Romanized Japanese (Rōmaji) occasionally uses macrons to mark long vowels. The Hepburn romanization system uses macrons to mark long vowels, and the Kunrei-shiki and Nihon-shiki systems use a circumflex.
Sanskrit, as well as many of its descendants, like Hindi and Bengali, uses a lossless romanization system, IAST. This includes several letters with diacritical markings, such as the macron (ā, ī, ū), over- and underdots (ṛ, ḥ, ṃ, ṇ, ṣ, ṭ, ḍ) as well as a few others (ś, ñ).
Limits
Orthographic
Possibly the greatest number of combining diacritics required to compose a valid character in any Unicode language is 8, for the "well-known grapheme cluster in Tibetan and Ranjana scripts"; its rendering may be broken depending on the browser.
Unorthographic/ornamental
Some users have explored the limits of rendering in web browsers and other software by "decorating" words with excessive nonsensical diacritics per character to produce so-called Zalgo text.
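The mechanics behind both ordinary diacritic composition and Zalgo text can be demonstrated with Unicode combining characters. The following Python sketch uses only the standard library (unicodedata, random); the zalgo helper and the sample strings are illustrative inventions, and the visual result of stacked marks depends on the font and renderer.

```python
import random
import unicodedata

# Composition and decomposition of a letter plus a combining mark.
e_acute = unicodedata.normalize("NFC", "e" + "\u0301")   # -> precomposed "é" (U+00E9)
decomposed = unicodedata.normalize("NFD", e_acute)        # -> "e" followed by U+0301
print(e_acute, [hex(ord(c)) for c in decomposed])

# "Zalgo" text simply stacks many combining marks on each base character.
COMBINING = [chr(cp) for cp in range(0x0300, 0x0370)      # Combining Diacritical Marks block
             if unicodedata.category(chr(cp)) == "Mn"]

def zalgo(text: str, marks_per_char: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    return "".join(
        ch + "".join(rng.choice(COMBINING) for _ in range(marks_per_char))
        for ch in text
    )

print(zalgo("diacritic"))
```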
List of diacritics in Unicode
Diacritics for Latin script in Unicode:
See also
Latin-script alphabets
Alt code
:Category:Letters with diacritics
Collating sequence
Combining character
Compose key
English terms with diacritical marks
Heavy metal umlaut
ISO/IEC 8859 8-bit extended-Latin-alphabet European character encodings
Latin alphabet
List of Latin letters
List of precomposed Latin characters in Unicode
List of U.S. cities with diacritics
Romanization
Notes
References
External links
Context of Diacritics – a research project
Diacritics Project
Unicode
Orthographic diacritics and multilingual computing, by J. C. Wells
Notes on the use of the diacritics, by Markus Lång
Entering International Characters (in Linux, KDE)
Standard Character Set for Macintosh PDF at Adobe
Orthography
Punctuation
Typography | Diacritic | [
"Mathematics"
] | 11,783 | [
"Symbols",
"Diacritics"
] |
8,449 | https://en.wikipedia.org/wiki/Developmental%20biology | Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism.
Perspectives
The main processes involved in the embryonic development of animals are: tissue patterning (via regional specification and patterned cell differentiation); tissue growth; and tissue morphogenesis.
Regional specification refers to the processes that create the spatial patterns in a ball or sheet of initially similar cells. This generally involves the action of cytoplasmic determinants, located within parts of the fertilized egg, and of inductive signals emitted from signaling centers in the embryo. The early stages of regional specification do not generate functional differentiated cells, but cell populations committed to developing to a specific region or part of the organism. These are defined by the expression of specific combinations of transcription factors.
Cell differentiation relates specifically to the formation of functional cell types such as nerve, muscle, secretory epithelia, etc. Differentiated cells contain large amounts of specific proteins associated with cell function.
Morphogenesis relates to the formation of a three-dimensional shape. It mainly involves the orchestrated movements of cell sheets and of individual cells. Morphogenesis is important for creating the three germ layers of the early embryo (ectoderm, mesoderm, and endoderm) and for building up complex structures during organ development.
Tissue growth involves both an overall increase in tissue size, and also the differential growth of parts (allometry) which contributes to morphogenesis. Growth mostly occurs through cell proliferation but also through changes in cell size or the deposition of extracellular materials.
The development of plants involves similar processes to that of animals. However, plant cells are mostly immotile so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development.
Generative biology
Generative biology is the generative science that explores the dynamics guiding the development and evolution of a biological morphological form.
Developmental processes
Cell differentiation
Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation.
Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.
Regeneration
Regeneration indicates the ability to regrow a missing part. This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But most interest by developmental biologists has been shown in the regeneration of parts in free-living animals. In particular four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: Hydra, which can regenerate any part of the polyp from a small fragment, and planarian worms, which can usually regenerate both heads and tails. Both of these examples have continuous cell turnover fed by stem cells and, in planaria at least, some of the stem cells have been shown to be pluripotent. The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, and the limbs of urodele amphibians. Considerable information is now available about amphibian limb regeneration and it is known that each cell type regenerates itself, except for connective tissues where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo.
There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected.
Embryonic development of animals
The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote. This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm. These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions.
Mouse epiblast primordial germ cells undergo extensive epigenetic reprogramming. This process involves genome-wide DNA demethylation, chromatin reorganization and epigenetic imprint erasure leading to totipotency. DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway.
Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm, mesoderm and endoderm. These sheets are known as germ layers. This is the process of gastrulation. During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta, needed for support and nutrition of the embryo, and also establish differences of commitment along the anteroposterior axis (head, trunk and tail).
Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside.
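A minimal numerical sketch of this source-gradient-threshold logic (often called the French flag model) is given below. All of the numbers (production level, diffusion coefficient, decay rate, thresholds, positions) are hypothetical illustrations, not measurements from any real embryo; the only assumption carried over from the text is that a diffusing, decaying inducing factor falls off with distance from its source and that cells respond to threshold concentrations.

```python
import math

# Steady state of production at x = 0 with diffusion D and first-order decay k
# gives an exponential gradient: C(x) = C0 * exp(-x / lam), with lam = sqrt(D / k).
C0, D, k = 1.0, 10.0, 0.1        # arbitrary, illustrative units
lam = math.sqrt(D / k)           # characteristic decay length of the gradient

def concentration(x: float) -> float:
    return C0 * math.exp(-x / lam)

def fate(x: float, high: float = 0.5, low: float = 0.1) -> str:
    """Assign a zone by comparing the local concentration with two thresholds."""
    c = concentration(x)
    if c > high:
        return "zone A (nearest the signaling center)"
    if c > low:
        return "zone B"
    return "zone C (farthest from the source)"

for x in (2.0, 10.0, 30.0):
    print(f"x = {x:4.1f}  C = {concentration(x):.3f}  -> {fate(x)}")
```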
Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. In addition, the first morphogenetic movements of embryogenesis, such as gastrulation, epiboly and twisting, directly activate pathways involved in endomesoderm specification through mechanotransduction processes. This property has been suggested to be evolutionarily inherited from endomesoderm specification that was mechanically stimulated by marine hydrodynamic flow in the first animal organisms (the first metazoans). Twisting along the body axis with a left-handed chirality is found in all chordates (including vertebrates) and is addressed by the axial twist theory.
Growth in embryos is mostly autonomous. For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy.
The whole process needs to be coordinated in time and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events.
Metamorphosis
Developmental processes are very evident during the process of metamorphosis. This occurs in various types of animal. Well-known examples are seen in frogs, which usually hatch as tadpoles and metamorphose into adult frogs, and certain insects which hatch as larvae and then become remodeled to the adult form during a pupal stage.
All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog Xenopus, and the biology of the imaginal discs, which generate the adult body parts of the fly Drosophila melanogaster.
Plant development
Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology.
Plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature.
The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium.
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Morphological variation
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Evolution of plant morphology
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and their evolution. During the colonization of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to more complex morphogenesis of land plants.
Most land plants share a common ancestor, multicellular algae. An example of the evolution of plant morphology is seen in charophytes. Studies have shown that charophytes have traits that are homologous to those of land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory for the evolution of plant morphology is the antithetic theory. The antithetic theory states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte; the sporophyte then develops as an independent organism.
Developmental model organisms
Much of developmental biology research in recent decades has focused on the use of a small number of model organisms. It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers. In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans.
Plants
Thale cress (Arabidopsis thaliana)
Vertebrates
Frog: Xenopus (X. laevis and X. tropicalis). Good embryo supply. Especially suitable for microsurgery.
Zebrafish: Danio rerio. Good embryo supply. Well developed genetics.
Chicken: Gallus gallus. Early stages similar to mammal, but microsurgery easier. Low cost.
Mouse: Mus musculus. A mammal with well developed genetics.
Invertebrates
Fruit fly: Drosophila melanogaster. Good embryo supply. Well developed genetics.
Nematode: Caenorhabditis elegans. Good embryo supply. Well developed genetics. Low cost.
Unicellular
Algae: Chlamydomonas
Yeast: Saccharomyces
Others
Also popular for some purposes have been sea urchins and ascidians. For studies of regeneration urodele amphibians such as the axolotl Ambystoma mexicanum are used, and also planarian worms such as Schmidtea mediterranea. Organoids have also been demonstrated as an efficient model for development. Plant development has focused on the thale cress Arabidopsis thaliana as a model organism.
See also
References
Further reading
External links
Society for Developmental Biology
Collaborative resources
Developmental Biology - 10th edition
Essential Developmental Biology 3rd edition
Embryo Project Encyclopedia
Philosophy of biology | Developmental biology | [
"Biology"
] | 3,492 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
8,454 | https://en.wikipedia.org/wiki/Double%20planet | In astronomy, a double planet (also binary planet) is a binary satellite system where both objects are planets, or planetary-mass objects, and whose joint barycenter is external to both planetary bodies.
Although up to a third of the star systems in the Milky Way are binary, double planets are expected to be much rarer: the typical planet-to-satellite mass ratio is around 1:10,000, the bodies are influenced heavily by the gravitational pull of the parent star, and according to the giant-impact hypothesis they are gravitationally stable only under particular circumstances.
The Solar System does not have an official double planet; however, the Earth–Moon system is sometimes considered to be one. In promotional materials advertising the SMART-1 mission, the European Space Agency referred to the Earth–Moon system as a double planet.
Several dwarf planet candidates can be described as binary planets. At its 2006 General Assembly, the International Astronomical Union considered a proposal that Pluto and Charon be reclassified as a double planet, but the proposal was abandoned in favor of the current IAU definition of planet. Other trans-Neptunian systems with proportionally large planetary-mass satellites include Eris–Dysnomia, Orcus–Vanth and Varda–Ilmarë.
Binary asteroids with components of roughly equal mass are sometimes referred to as double minor planets. These include the binary asteroids 69230 Hermes and 90 Antiope and binary Kuiper belt objects (KBOs) such as 79360 Sila–Nunam.
Definition of "double planet"
There is debate as to what criteria should be used to distinguish a "double planet" from a "planet–moon system". The following are considerations.
Both bodies satisfy planet criterion
A definition proposed in the Astronomical Journal calls for both bodies to individually satisfy an orbit-clearing criterion in order to be called a double planet.
Mass ratios closer to 1
One important consideration for defining "double planets" is the ratio of the masses of the two bodies. A mass ratio of 1 would indicate bodies of equal mass, and bodies with mass ratios closer to 1 are more attractive to label as "doubles". Using this definition, the satellites of Mars, Jupiter, Saturn, Uranus, and Neptune can all easily be excluded; they all have masses less than 0.00025 (1/4000) of the planets around which they revolve. Some dwarf planets, too, have satellites substantially less massive than the dwarf planets themselves.
The most notable exception is the Pluto–Charon system. The Charon-to-Pluto mass ratio of 0.122 (≈ 1/8) is close enough to 1 that Pluto and Charon have frequently been described by many scientists as "double dwarf planets" ("double planets" prior to the 2006 definition of "planet"). The International Astronomical Union (IAU) earlier classified Charon as a satellite of Pluto, but had also explicitly expressed the willingness to reconsider the bodies as double dwarf planets in the future. However, a 2006 IAU report classified Charon–Pluto as a double planet.
The Moon-to-Earth mass ratio of 0.01230 (≈ 1/81) is also notably close to 1 when compared to all other satellite-to-planet ratios. Consequently, some scientists view the Earth–Moon system as a double planet as well, though this is a minority view. Eris's lone satellite, Dysnomia, has a radius a sizable fraction of that of Eris; assuming similar densities (Dysnomia's compositional make-up may or may not differ substantially from Eris's), the mass ratio would be intermediate to the Moon–Earth and Charon–Pluto ratios.
Center-of-mass position
Currently, the most commonly proposed definition for a double-planet system is one in which the barycenter, around which both bodies orbit, lies outside both bodies. Under this definition, Pluto and Charon are double dwarf planets, since they orbit a point clearly outside of Pluto, as visible in animations created from images of the New Horizons space probe in June 2015.
Under this definition, the Earth–Moon system is not currently a double planet; although the Moon is massive enough to cause the Earth to make a noticeable revolution around this center of mass, this point nevertheless lies well within Earth. However, the Moon currently migrates outward from Earth at a rate of approximately 3.8 cm per year; in a few billion years, the Earth–Moon system's center of mass will lie outside Earth, which would make it a double-planet system.
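A rough calculation shows how the criterion plays out for the two systems just discussed. The sketch below uses the mass ratios quoted above together with rounded, commonly cited values for the separations and radii (assumed figures, not taken from this article).

```python
# Distance of the barycenter from the primary's center in a two-body system:
#   r = d * m_secondary / (m_primary + m_secondary)
def barycenter_offset(separation_km: float, mass_ratio: float) -> float:
    return separation_km * mass_ratio / (1.0 + mass_ratio)

# Earth–Moon: separation ~384,400 km, mass ratio ~0.0123, Earth radius ~6,371 km.
em = barycenter_offset(384_400, 0.0123)    # ~4,670 km -> inside Earth
# Pluto–Charon: separation ~19,600 km, mass ratio ~0.122, Pluto radius ~1,188 km.
pc = barycenter_offset(19_600, 0.122)      # ~2,130 km -> outside Pluto
print(f"Earth–Moon barycenter:   {em:7,.0f} km from Earth's center (Earth radius 6,371 km)")
print(f"Pluto–Charon barycenter: {pc:7,.0f} km from Pluto's center (Pluto radius 1,188 km)")
```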
The center of mass of the Jupiter–Sun system lies outside the surface of the Sun, though arguing that Jupiter and the Sun are a double star is not analogous to arguing Pluto–Charon is a double dwarf planet. Jupiter is too light to be a fusor; were it thirteen times heavier, it would achieve deuterium fusion and become a brown dwarf.
Tug-of-war value
Isaac Asimov suggested a distinction between planet–moon and double-planet structures based in part on what he called a "tug-of-war" value, which does not consider their relative sizes. This quantity is simply the ratio of the force exerted on the smaller body by the larger (primary) body to the force exerted on the smaller body by the Sun. This can be shown to equal

tug-of-war value = (m_p / m_s) · (d_s / d_p)²

where m_p is the mass of the primary (the larger body), m_s is the mass of the Sun, d_s is the distance between the smaller body and the Sun, and d_p is the distance between the smaller body and the primary. The tug-of-war value does not rely on the mass of the satellite (the smaller body).
This formula actually reflects the relation of the gravitational effects on the smaller body from the larger body and from the Sun. The tug-of-war figure for Saturn's moon Titan is 380, which means that Saturn's hold on Titan is 380 times as strong as the Sun's hold on Titan. Titan's tug-of-war value may be compared with that of Saturn's moon Phoebe, which has a tug-of-war value of just 3.5; that is, Saturn's hold on Phoebe is only 3.5 times as strong as the Sun's hold on Phoebe.
Asimov calculated tug-of-war values for several satellites of the planets. He showed that even the largest gas giant, Jupiter, had only a slightly better hold than the Sun on its outer captured satellites, some with tug-of-war values not much higher than one. In nearly every one of Asimov's calculations the tug-of-war value was found to be greater than one, so in those cases the Sun loses the tug-of-war with the planets. The one exception was Earth's Moon, where the Sun wins the tug-of-war with a value of 0.46, which means that Earth's hold on the Moon is less than half as strong as the Sun's. Asimov included this with his other arguments that Earth and the Moon should be considered a binary planet.
See the Path of Earth and Moon around Sun section in the "Orbit of the Moon" article for a more detailed explanation.
This definition of double planet depends on the pair's distance from the Sun. If the Earth–Moon system happened to orbit farther away from the Sun than it does now, then Earth would win the tug of war. For example, at the orbit of Mars, the Moon's tug-of-war value would be 1.05. Also, several tiny moons discovered since Asimov's proposal would qualify as double planets by this argument. Neptune's small outer moons Neso and Psamathe, for example, have tug-of-war values of 0.42 and 0.44, less than that of Earth's Moon. Yet their masses are tiny compared to Neptune's, with estimated mass ratios of roughly 1.5 and 0.4 parts per billion.
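The figures quoted in this section can be reproduced approximately from the formula above. The sketch below uses rounded modern values for the masses and distances (assumed inputs, not Asimov's original numbers), so the results agree only roughly with his published values.

```python
# Tug-of-war value = (m_primary / m_sun) * (d_to_sun / d_to_primary) ** 2
M_SUN = 1.989e30        # kg
AU = 1.496e11           # m

def tug_of_war(m_primary_kg: float, d_to_sun_m: float, d_to_primary_m: float) -> float:
    return (m_primary_kg / M_SUN) * (d_to_sun_m / d_to_primary_m) ** 2

M_EARTH, M_SATURN = 5.97e24, 5.68e26     # kg, rounded
print(tug_of_war(M_EARTH, 1.00 * AU, 3.844e8))    # Moon at Earth's orbit -> ~0.46
print(tug_of_war(M_EARTH, 1.52 * AU, 3.844e8))    # Moon at Mars's orbit  -> ~1.05
print(tug_of_war(M_SATURN, 9.58 * AU, 1.222e9))   # Titan                 -> roughly 380-390
```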
Formation of the system
A final consideration is the way in which the two bodies came to form a system. Both the Earth–Moon and Pluto–Charon systems are thought to have been formed as a result of giant impacts: one body was impacted by a second body, resulting in a debris disk, and through accretion, either two new bodies formed or one new body formed, with the larger body remaining (but changed). However, a giant impact is not a sufficient condition for two bodies being "double planets" because such impacts can also produce tiny satellites, such as the four small outer satellites of Pluto.
A now-abandoned hypothesis for the origin of the Moon was actually called the "double-planet hypothesis"; the idea was that the Earth and the Moon formed in the same region of the Solar System's proto-planetary disk, forming a system under gravitational interaction. This idea, too, is a problematic condition for defining two bodies as "double planets" because planets can "capture" moons through gravitational interaction. For example, the moons of Mars (Phobos and Deimos) are thought to be asteroids captured long ago by Mars. Such a definition would also deem Neptune–Triton a double planet, since Triton was a Kuiper belt body the same size and of similar composition to Pluto, later captured by Neptune.
See also
2006 definition of planet
3753 Cruithne
Co-orbital configuration
Definition of a planet
Ecliptic
Hill sphere
Natural satellite
Orbit of the Moon
Quasi-satellite
Satellite system (astronomy)
References
Informational notes
Citations
Bibliography
Further reading
External links
Types of planet
Binary systems | Double planet | [
"Astronomy"
] | 1,948 | [
"Astronomical objects",
"Binary systems"
] |
8,456 | https://en.wikipedia.org/wiki/Denaturation%20%28biochemistry%29 | In biochemistry, denaturation is a process in which proteins or nucleic acids lose folded structure present in their native state due to various factors, including application of some external stress or compound, such as a strong acid or base, a concentrated inorganic salt, an organic solvent (e.g., alcohol or chloroform), agitation and radiation, or heat. If proteins in a living cell are denatured, this results in disruption of cell activity and possibly cell death. Protein denaturation is also a consequence of cell death. Denatured proteins can exhibit a wide range of characteristics, from conformational change and loss of solubility or dissociation of cofactors to aggregation due to the exposure of hydrophobic groups. The loss of solubility as a result of denaturation is called coagulation. Denatured proteins lose their 3D structure, and therefore, cannot function.
Proper protein folding is key to whether a globular or membrane protein can do its job correctly; it must be folded into the native shape to function. However, hydrogen bonds and cofactor-protein binding, which play a crucial role in folding, are rather weak, and thus, easily affected by heat, acidity, varying salt concentrations, chelating agents, and other stressors which can denature the protein. This is one reason why cellular homeostasis is physiologically necessary in most life forms.
Common examples
When food is cooked, some of its proteins become denatured. This is why boiled eggs become hard and cooked meat becomes firm.
A classic example of denaturing in proteins comes from egg whites, which are typically largely egg albumins in water. Fresh from the eggs, egg whites are transparent and liquid. Cooking the thermally unstable whites turns them opaque, forming an interconnected solid mass. The same transformation can be effected with a denaturing chemical. Pouring egg whites into a beaker of acetone will also turn egg whites translucent and solid. The skin that forms on curdled milk is another common example of denatured protein. The cold appetizer known as ceviche is prepared by chemically "cooking" raw fish and shellfish in an acidic citrus marinade, without heat.
Protein denaturation
Denatured proteins can exhibit a wide range of characteristics, from loss of solubility to protein aggregation.
Background
Proteins or polypeptides are polymers of amino acids. A protein is created by ribosomes that "read" RNA that is encoded by codons in the gene and assemble the requisite amino acid combination from the genetic instruction, in a process known as translation. The newly created protein strand then undergoes posttranslational modification, in which additional atoms or molecules are added, for example copper, zinc, or iron. Once this post-translational modification process has been completed, the protein begins to fold (sometimes spontaneously and sometimes with enzymatic assistance), curling up on itself so that hydrophobic elements of the protein are buried deep inside the structure and hydrophilic elements end up on the outside. The final shape of a protein determines how it interacts with its environment.
Protein folding consists of a balance between a substantial number of weak intra-molecular interactions within a protein (hydrophobic, electrostatic, and van der Waals interactions) and protein–solvent interactions. As a result, this process is heavily reliant on the environmental state in which the protein resides. These environmental conditions include, but are not limited to, temperature, salinity, pressure, and the solvents that happen to be involved. Consequently, any exposure to extreme stresses (e.g. heat or radiation, high inorganic salt concentrations, strong acids and bases) can disrupt a protein's interactions and inevitably lead to denaturation.
When a protein is denatured, secondary and tertiary structures are altered but the peptide bonds of the primary structure between the amino acids are left intact. Since all structural levels of the protein determine its function, the protein can no longer perform its function once it has been denatured. This is in contrast to intrinsically unstructured proteins, which are unfolded in their native state, but still functionally active and tend to fold upon binding to their biological target.
How denaturation occurs at levels of protein structure
In quaternary structure denaturation, protein sub-units are dissociated and/or the spatial arrangement of protein subunits is disrupted.
Tertiary structure denaturation involves the disruption of:
Covalent interactions between amino acid side-chains (such as disulfide bridges between cysteine groups)
Non-covalent dipole-dipole interactions between polar amino acid side-chains (and the surrounding solvent)
Van der Waals (induced dipole) interactions between nonpolar amino acid side-chains.
In secondary structure denaturation, proteins lose all regular repeating patterns such as alpha-helices and beta-pleated sheets, and adopt a random coil configuration.
Primary structure, such as the sequence of amino acids held together by covalent peptide bonds, is not disrupted by denaturation.
Loss of function
Most biological substrates lose their biological function when denatured. For example, enzymes lose their activity, because the substrates can no longer bind to the active site, and because amino acid residues involved in stabilizing substrates' transition states are no longer positioned to be able to do so. The denaturing process and the associated loss of activity can be measured using techniques such as dual-polarization interferometry, CD, QCM-D and MP-SPR.
Loss of activity due to heavy metals and metalloids
By targeting proteins, heavy metals have been known to disrupt the function and activity carried out by proteins. Heavy metals fall into categories consisting of transition metals as well as a select number of metalloids. These metals, when interacting with native, folded proteins, tend to play a role in obstructing their biological activity. This interference can be carried out in a number of different ways. These heavy metals can form a complex with the functional side chain groups present in a protein or form bonds to free thiols. Heavy metals also play a role in oxidizing amino acid side chains present in proteins. Along with this, when interacting with metalloproteins, heavy metals can displace and replace key metal ions. As a result, heavy metals can interfere with folded proteins, which can strongly deter protein stability and activity.
Reversibility and irreversibility
In many cases, denaturation is reversible (the proteins can regain their native state when the denaturing influence is removed). This process can be called renaturation. This understanding has led to the notion that all the information needed for proteins to assume their native state was encoded in the primary structure of the protein, and hence in the DNA that codes for the protein, the so-called "Anfinsen's thermodynamic hypothesis".
Denaturation can also be irreversible. This irreversibility is typically a kinetic, not thermodynamic irreversibility, as a folded protein generally has lower free energy than when it is unfolded. Through kinetic irreversibility, the fact that the protein is stuck in a local minimum can stop it from ever refolding after it has been irreversibly denatured.
Protein denaturation due to pH
Denaturation can also be caused by changes in pH, which can affect the chemistry of the amino acids and their residues. The ionizable groups in amino acids are able to become ionized when changes in pH occur. A pH change to more acidic or more basic conditions can induce unfolding. Acid-induced unfolding often occurs between pH 2 and 5, while base-induced unfolding usually requires pH 10 or higher.
Nucleic acid denaturation
Nucleic acids (including RNA and DNA) are nucleotide polymers synthesized by polymerase enzymes during either transcription or DNA replication. Following 5'-3' synthesis of the backbone, individual nitrogenous bases are capable of interacting with one another via hydrogen bonding, thus allowing for the formation of higher-order structures. Nucleic acid denaturation occurs when hydrogen bonding between nucleotides is disrupted, and results in the separation of previously annealed strands. For example, denaturation of DNA due to high temperatures results in the disruption of base pairs and the separation of the double-stranded helix into two single strands. Nucleic acid strands are capable of re-annealing when "normal" conditions are restored, but if restoration occurs too quickly, the nucleic acid strands may re-anneal imperfectly resulting in the improper pairing of bases.
Biologically-induced denaturation
The non-covalent interactions between antiparallel strands in DNA can be broken in order to "open" the double helix when biologically important mechanisms such as DNA replication, transcription, DNA repair or protein binding are set to occur. The area of partially separated DNA is known as the denaturation bubble, which can be more specifically defined as the opening of a DNA double helix through the coordinated separation of base pairs.
The first model that attempted to describe the thermodynamics of the denaturation bubble was introduced in 1966 and called the Poland-Scheraga Model. This model describes the denaturation of DNA strands as a function of temperature. As the temperature increases, the hydrogen bonds between the base pairs are increasingly disturbed and "denatured loops" begin to form. However, the Poland-Scheraga Model is now considered elementary because it fails to account for the confounding implications of DNA sequence, chemical composition, stiffness and torsion.
Recent thermodynamic studies have inferred that the lifetime of a singular denaturation bubble ranges from 1 microsecond to 1 millisecond. This information is based on established timescales of DNA replication and transcription. Currently, biophysical and biochemical research studies are being performed to more fully elucidate the thermodynamic details of the denaturation bubble.
Denaturation due to chemical agents
With polymerase chain reaction (PCR) being among the most popular contexts in which DNA denaturation is desired, heating is the most frequent method of denaturation. Other than denaturation by heat, nucleic acids can undergo the denaturation process through various chemical agents such as formamide, guanidine, sodium salicylate, dimethyl sulfoxide (DMSO), propylene glycol, and urea. These chemical denaturing agents lower the melting temperature (Tm) by competing for hydrogen bond donors and acceptors with pre-existing nitrogenous base pairs. Some agents are even able to induce denaturation at room temperature. For example, alkaline agents (e.g. NaOH) have been shown to denature DNA by changing pH and removing hydrogen-bond contributing protons. These denaturants have been employed to make denaturing gradient gel electrophoresis (DGGE) gels, which promote denaturation of nucleic acids in order to eliminate the influence of nucleic acid shape on their electrophoretic mobility.
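For orientation on the melting temperature (Tm) that such agents lower, a very rough estimate for a short oligonucleotide can be made with the classic Wallace rule of thumb (about 2 °C per A/T pair and 4 °C per G/C pair). This rule is a generic laboratory heuristic for primers shorter than roughly 14 nucleotides, not a method described in this article, and it ignores salt concentration, strand length effects and the denaturants discussed above.

```python
# Wallace rule: Tm ≈ 2 °C × (number of A/T) + 4 °C × (number of G/C),
# valid only as a coarse heuristic for short oligos.
def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("ATGCATGCATGC"))   # 6 A/T + 6 G/C -> 36.0 (°C)
```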
Chemical denaturation as an alternative
The optical activity (absorption and scattering of light) and hydrodynamic properties (translational diffusion, sedimentation coefficients, and rotational correlation times) of formamide-denatured nucleic acids are similar to those of heat-denatured nucleic acids. Therefore, depending on the desired effect, chemically denaturing DNA can provide a gentler procedure for denaturing nucleic acids than denaturation induced by heat. Studies comparing different denaturation methods such as heating, bead milling with different bead sizes, probe sonication, and chemical denaturation show that chemical denaturation can provide quicker denaturation compared to the other physical denaturation methods described. Particularly in cases where rapid renaturation is desired, chemical denaturation agents can provide an ideal alternative to heating. For example, DNA strands denatured with alkaline agents such as NaOH renature as soon as phosphate buffer is added.
Denaturation due to air
Small, electronegative molecules such as nitrogen and oxygen, which are the primary gases in air, significantly impact the ability of surrounding molecules to participate in hydrogen bonding. These molecules compete with surrounding hydrogen bond acceptors for hydrogen bond donors, therefore acting as "hydrogen bond breakers" and weakening interactions between surrounding molecules in the environment. Antiparallel strands in DNA double helices are non-covalently bound by hydrogen bonding between base pairs; nitrogen and oxygen therefore maintain the potential to weaken the integrity of DNA when exposed to air. As a result, DNA strands exposed to air require less force to separate and exhibit lower melting temperatures.
Applications
Many laboratory techniques rely on the ability of nucleic acid strands to separate. By understanding the properties of nucleic acid denaturation, the following methods were created:
PCR
Southern blot
Northern blot
DNA sequencing
Denaturants
Protein denaturants
Acids
Acidic protein denaturants include:
Acetic acid
Trichloroacetic acid 12% in water
Sulfosalicylic acid
Bases
Bases work similarly to acids in denaturation. They include:
Sodium bicarbonate
Solvents
Most organic solvents are denaturing, including:
Ethanol
Cross-linking reagents
Cross-linking agents for proteins include:
Formaldehyde
Glutaraldehyde
Chaotropic agents
Chaotropic agents include:
Urea 6–8 mol/L
Guanidinium chloride 6 mol/L
Lithium perchlorate 4.5 mol/L
Sodium dodecyl sulfate
Disulfide bond reducers
Agents that break disulfide bonds by reduction include:
2-Mercaptoethanol
Dithiothreitol
TCEP (tris(2-carboxyethyl)phosphine)
Chemically reactive agents
Agents such as hydrogen peroxide, elemental chlorine, hypochlorous acid (chlorine water), bromine, bromine water, iodine, nitric and other oxidising acids, and ozone react with sensitive moieties such as sulfides/thiols and activated aromatic rings (e.g. phenylalanine), in effect damaging the protein and rendering it useless.
Other
Mechanical agitation
Picric acid
Radiation
Temperature
Nucleic acid denaturants
Chemical
Acidic nucleic acid denaturants include:
Acetic acid
HCl
Nitric acid
Basic nucleic acid denaturants include:
NaOH
Other nucleic acid denaturants include:
DMSO
Formamide
Guanidine
Sodium salicylate
Propylene glycol
Urea
Physical
Thermal denaturation
Bead milling
Probe sonication
Radiation
See also
Denatured alcohol
Equilibrium unfolding
Fixation (histology)
Molten globule
Protein folding
Random coil
References
External links
McGraw-Hill Online Learning Center — Animation: Protein Denaturation
Biochemical reactions
Nucleic acids
Protein structure | Denaturation (biochemistry) | [
"Chemistry",
"Biology"
] | 3,092 | [
"Biomolecules by chemical classification",
"Biochemical reactions",
"Structural biology",
"Biochemistry",
"Protein structure",
"Nucleic acids"
] |
8,463 | https://en.wikipedia.org/wiki/Dubnium | Dubnium is a synthetic chemical element; it has symbol Db and atomic number 105. It is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 16 hours. This greatly limits extended research on the element.
Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, resulting in credit for the discovery being officially shared between both teams. The element was formally named dubnium in 1997 after the town of Dubna, the site of the JINR.
Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this.
Introduction
Discovery
Background
Uranium, element 92, is the heaviest element to occur in significant quantities in nature; heavier elements can only be practically produced by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, which was reached in 1955. From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars.
Reports
The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Soviet Union, in April 1968. The scientists bombarded 243Am with a beam of 22Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV (t1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively.
243Am + 22Ne → 265−x105 + x n (x = 4, 5)
After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and 2.2 s. They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. To establish that this activity was not from a (22Ne,xn) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105.
In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding 249Cf with 14N, Pb with 15N, and Hg with 15N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 256103, implying that the parent nuclei were of 260105.
249Cf + 15N → 260105 + 4 n
These results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 260105, leaving only 261105 as a possibly produced isotope.
JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105.
In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105.
Naming controversy
JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name bohrium (Bo) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory; they soon changed their proposal to nielsbohrium (Ns) to avoid confusion with boron. Another proposed name was dubnium. When LBL first announced their synthesis of element 105, they proposed that the new element be named hahnium (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy.
In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others—to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. The joint neutral group never assembled to assess the claims, and the conflict remained unresolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be unnilpentium, from the Latin roots un- and nil- and the Greek root pent- (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims.
In 1981, the Gesellschaft für Schwerionenforschung (GSI; Society for Heavy Ion Research) in Darmstadt, Hesse, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. GSI acknowledged JINR's efforts by suggesting the name nielsbohrium for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first.
In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Transfermium Working Group (TWG) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams.
LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report.
In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed joliotium (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names rutherfordium and hahnium, originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name seaborgium for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery.
In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. They suggested seaborgium for element 106 in exchange for the removal of all the other American proposals, except for the established name lawrencium for element 103. The equally entrenched name nobelium for element 102 was replaced by flerovium after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. The name flerovium was later used for element 114.
In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named dubnium (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name dubnium had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names rutherfordium and seaborgium for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106.
Even after 1997, LBL still sometimes used the name hahnium for element 105 in their own material, doing so as recently as 2014. However, the problem was resolved in the literature as Jens Volker Kratz, editor of Radiochimica Acta, refused to accept papers not using the 1997 IUPAC nomenclature.
Isotopes
Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, 268Db, has a half-life of around a day. No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production.
The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron–proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.)
Only a few atoms of 268Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. As of 2022, following additional experiments performed at the JINR's Superheavy Element Factory (which started operations in 2019), the half-life of 268Db is measured to be about 16 hours. The second most stable isotope, 270Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei 288Mc and 294Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for 48Ca beams. For its mass, 48Ca has by far the greatest neutron excess of all practically stable nuclei, both quantitatively and relatively, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers.
Predicted properties
According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum. Several studies have investigated the properties of element 105 and found that they generally agreed with the predictions of the periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. These properties have remained challenging to measure for several reasons: the difficulties of production of superheavy atoms, the low rates of production, which only allows for microscopic scales, requirements for a radiochemistry laboratory to test the atoms, short half-lives of those atoms, and the presence of many unwanted activities apart from those of synthesis of superheavy atoms. So far, studies have only been performed on single atoms.
Atomic and physical
A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and a nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV.
A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons.
Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their total angular momentum quantum number j lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d3/2.)
A singly ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry.
Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 21.6 g/cm3.
Chemical
Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic laws by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine).
Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum and the +3 and +4 states will be less stable. The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido- complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to do so than tantalum.
Experimental chemistry
Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium.
The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table.
This prompted further exploration of the chemical behavior of complexes of dubnium. Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that the formed complex was either or . After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non‐extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation‐exchange columns with α‐hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed.
The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of , which was predicted to be less volatile than . Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of . Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides.
In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer with dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. Mostly containing a +5 species, which was immediately assigned to dubnium, it also had a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium.
In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium forms and tantalum forms . Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed . From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum.
In 2021, the volatile heavy group 5 oxychlorides MOCl3 (M = Nb, Ta, Db) were experimentally studied at the JAEA tandem accelerator. The trend in volatilities was found to be NbOCl3 > TaOCl3 ≥ DbOCl3, so that dubnium behaves in line with periodic trends.
Notes
References
Bibliography
Chemical elements
Transition metals
Synthetic elements
Chemical elements with body-centered cubic structure | Dubnium | [
"Physics",
"Chemistry"
] | 5,176 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
8,464 | https://en.wikipedia.org/wiki/Disaccharide | A disaccharide (also called a double sugar or biose) is the sugar formed when two monosaccharides are joined by glycosidic linkage. Like monosaccharides, disaccharides are simple sugars soluble in water. Three common examples are sucrose, lactose, and maltose.
Disaccharides are one of the four chemical groupings of carbohydrates (monosaccharides, disaccharides, oligosaccharides, and polysaccharides). The most common types of disaccharides—sucrose, lactose, and maltose—have 12 carbon atoms, with the general formula C12H22O11. The differences in these disaccharides are due to atomic arrangements within the molecule.
The joining of monosaccharides into a double sugar happens by a condensation reaction, which involves the elimination of a water molecule from the functional groups only. Breaking apart a double sugar into its two monosaccharides is accomplished by hydrolysis with the help of a type of enzyme called a disaccharidase. As building the larger sugar ejects a water molecule, breaking it down consumes a water molecule. These reactions are vital in metabolism. Each disaccharide is broken down with the help of a corresponding disaccharidase (sucrase, lactase, and maltase).
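As an illustrative check of the condensation stoichiometry just described, the short Python sketch below (not part of the source; the element-count dictionaries and the use of `collections.Counter` are assumptions made for the example) removes one water molecule from the combined formulas of two hexoses and recovers C12H22O11.

```python
from collections import Counter

# Element counts for two hexoses and for water (assumed example values).
glucose = Counter({"C": 6, "H": 12, "O": 6})   # C6H12O6
fructose = Counter({"C": 6, "H": 12, "O": 6})  # C6H12O6
water = Counter({"C": 0, "H": 2, "O": 1})      # H2O

# Condensation: the two monosaccharides join and one water molecule is eliminated.
sucrose = glucose + fructose
sucrose.subtract(water)

print(dict(sucrose))  # {'C': 12, 'H': 22, 'O': 11}  ->  C12H22O11
```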
Classification
There are two functionally different classes of disaccharides:
Reducing disaccharides, in which one monosaccharide, the reducing sugar of the pair, still has a free hemiacetal unit that can perform as a reducing aldehyde group; lactose, maltose and cellobiose are examples of reducing disaccharides, each with one hemiacetal unit, the other occupied by the glycosidic bond, which prevents it from acting as a reducing agent. They can easily be detected by the Woehlk test or Fearon's test on methylamine.
Non-reducing disaccharides, in which the component monosaccharides bond through an acetal linkage between their anomeric centers. This results in neither monosaccharide being left with a hemiacetal unit that is free to act as a reducing agent. Sucrose and trehalose are examples of non-reducing disaccharides because their glycosidic bond is between their respective hemiacetal carbon atoms. The reduced chemical reactivity of the non-reducing sugars, in comparison to reducing sugars, may be an advantage where stability in storage is important.
Formation
The formation of a disaccharide molecule from two monosaccharide molecules proceeds by displacing a hydroxy group from one molecule and a hydrogen nucleus (a proton) from the other, so that the new vacant bonds on the monosaccharides join the two monomers together. Because of the removal of the water molecule from the product, the term of convenience for such a process is "dehydration reaction" (also "condensation reaction" or "dehydration synthesis"). For example, milk sugar (lactose) is a disaccharide made by condensation of one molecule of each of the monosaccharides glucose and galactose, whereas the disaccharide sucrose in sugar cane and sugar beet, is a condensation product of glucose and fructose. Maltose, another common disaccharide, is condensed from two glucose molecules.
The dehydration reaction that bonds monosaccharides into disaccharides (and also bonds monosaccharides into more complex polysaccharides) forms what are called glycosidic bonds.
Properties
The glycosidic bond can be formed between any hydroxy group on the component monosaccharide. So, even if both component sugars are the same (e.g., glucose), different bond combinations (regiochemistry) and stereochemistry (alpha- or beta-) result in disaccharides that are diastereoisomers with different chemical and physical properties. Depending on the monosaccharide constituents, disaccharides are sometimes crystalline, sometimes water-soluble, and sometimes sweet-tasting and sticky-feeling. Disaccharides can serve as functional groups by forming glycosidic bonds with other organic compounds, forming glycosides.
Assimilation
Digestion of disaccharides involves breakdown into monosaccharides.
Common disaccharides
{| class="wikitable"
|-
! Disaccharide
! Unit 1
! Unit 2
! Bond
|-
| Sucrose (table sugar, cane sugar, beet sugar, or saccharose)
| Glucose || Fructose || α(1→2)β
|-
| Lactose (milk sugar)
| Galactose || Glucose || β(1→4)
|-
| Maltose (malt sugar)
| Glucose || Glucose || α(1→4)
|-
| Trehalose
| Glucose || Glucose || α(1→1)α
|-
| Cellobiose
| Glucose || Glucose || β(1→4)
|-
| Chitobiose
| Glucosamine || Glucosamine || β(1→4)
|}
Maltose, cellobiose, and chitobiose are hydrolysis products of the polysaccharides starch, cellulose, and chitin, respectively.
Less common disaccharides include:
{| class="wikitable"
|-
! Disaccharide
! Units
! Bond
|-
| Kojibiose || Two glucoses || α(1→2)
|-
| Nigerose || Two glucoses || α(1→3)
|-
| Isomaltose || Two glucoses || α(1→6)
|-
| β,β-Trehalose || Two glucoses || β(1→1)β
|-
| α,β-Trehalose || Two glucoses || α(1→1)β
|-
| Sophorose || Two glucoses || β(1→2)
|-
| Laminaribiose || Two glucoses || β(1→3)
|-
| Gentiobiose || Two glucoses || β(1→6)
|-
| Trehalulose || One glucose and one fructose || α(1→1)
|-
| Turanose || One glucose and one fructose || α(1→3)
|-
| Maltulose || One glucose and one fructose || α(1→4)
|-
| Leucrose || One glucose and one fructose || α(1→5)
|-
| Isomaltulose || One glucose and one fructose || α(1→6)
|-
| Gentiobiulose || One glucose and one fructose || β(1→6)
|-
| Mannobiose || Two mannoses || Either α(1→2), α(1→3), α(1→4), or α(1→6)
|-
| Melibiose || One galactose and one glucose || α(1→6)
|-
| Allolactose || One galactose and one glucose || β(1→6)
|-
| Melibiulose || One galactose and one fructose || α(1→6)
|-
| Lactulose || One galactose and one fructose || β(1→4)
|-
| Rutinose || One rhamnose and one glucose || α(1→6)
|-
| Rutinulose || One rhamnose and one fructose || β(1→6)
|-
| Xylobiose || Two xylopyranoses || β(1→4)
|}
References
External links
Carbohydrate chemistry | Disaccharide | [
"Chemistry"
] | 1,769 | [
"Glycobiology",
"nan",
"Carbohydrate chemistry",
"Chemical synthesis"
] |
8,466 | https://en.wikipedia.org/wiki/Dorado | Dorado (, ) is a constellation in the Southern Sky. It was named in the late 16th century and is now one of the 88 modern constellations. Its name refers to the mahi-mahi (Coryphaena hippurus), which is known as dorado ("golden") in Spanish, although it has also been depicted as a swordfish. Dorado contains most of the Large Magellanic Cloud, the remainder being in the constellation Mensa. The South Ecliptic pole also lies within this constellation.
Even though the name Dorado is not Latin but Spanish, astronomers give it the Latin genitive form Doradus when naming its stars; it is treated (like the adjacent asterism Argo Navis) as a feminine proper name of Greek origin ending in -ō (like Io or Callisto or Argo), which have a genitive ending -ūs.
History
Dorado was one of twelve constellations named by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It appeared:
On a celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius.
First depiction in a celestial atlas, in Johann Bayer's Uranometria of 1603.
In Johannes Kepler's edition of Tycho Brahe's star list in the Rudolphine Tables of 1627; this was the first time that it was given the alternative name Xiphias, the swordfish. The name Dorado ultimately became dominant and was adopted by the IAU.
Dorado represents a dolphinfish; it has also been called the goldfish because Dorado are gold-colored.
Features
Stars
Alpha Doradus is a blue-white star of magnitude 3.3, 176 light-years from Earth. It is the brightest star in Dorado. Beta Doradus is a notably bright Cepheid variable star. It is a yellow-tinged supergiant star that has a minimum magnitude of 4.1 and a maximum magnitude of 3.5. One thousand and forty light-years from Earth, Beta Doradus has a period of 9 days and 20 hours.
R Doradus is one of the many variable stars in Dorado. S Doradus, a magnitude 9.721 hypergiant in the Large Magellanic Cloud, is the prototype of the S Doradus variable stars. The variable star R Doradus, at magnitude 5.73, has the largest-known apparent size of any star other than the Sun. Gamma Doradus is the prototype of the Gamma Doradus variable stars.
Supernova 1987A was the closest supernova to occur since the invention of the telescope. SNR 0509-67.5 is the remnant of an unusually energetic Type 1a supernova from about 400 years ago.
HE 0437-5439 is a hypervelocity star escaping from the Milky Way/Magellanic Cloud system.
Dorado is also the location of the South Ecliptic pole, which lies near the fish's head. The pole was called "Polus Doradinalis" by Philipp von Zesen, aka Caesius.
In early 2020, the exoplanet TOI-700 d was discovered orbiting the star TOI-700 in Dorado. This is the first potentially Earth-like exoplanet to be discovered by the Transiting Exoplanet Survey Satellite.
Deep-sky objects
Because Dorado contains part of the Large Magellanic Cloud, it is rich in deep sky objects. The Large Magellanic Cloud, 25,000 light-years in diameter, is a satellite galaxy of the Milky Way Galaxy, located at a distance of 179,000 light-years. It has been deformed by its gravitational interactions with the larger Milky Way. In 1987, it became host to SN 1987A, the first supernova of 1987 and the closest since 1604. This 25,000-light-year-wide galaxy contains over 10,000 million stars. All coordinates given are for Epoch J2000.0.
N 180B is an emission nebula located in the Large Magellanic Cloud.
NGC 1566 (RA 04h 20m 00s Dec -56° 56.3′) is a face-on spiral galaxy. It gives its name to the NGC 1566 Group of galaxies.
NGC 1755 (RA 04h 55m 13s Dec -68° 12.2′) is a globular cluster.
NGC 1763 (RA 04h 56m 49s Dec -68° 24.5′) is a bright nebula associated with three type B stars.
NGC 1761 (RA 04h 56m 37s Dec -66° 28.4') is an open cluster.
NGC 1820 (RA 05h 04m 02s Dec -67° 15.9′) is an open cluster.
NGC 1850 (RA 05h 08m 44s Dec -68° 45.7′) is a globular cluster.
NGC 1854 (RA 05h 09m 19s Dec -68° 50.8′) is a globular cluster.
NGC 1869 (RA 05h 13m 56s Dec -67° 22.8′) is an open cluster.
NGC 1901 (RA 05h 18m 15s Dec -68° 26.2′) is an open cluster.
NGC 1910 (RA 05h 18m 43s Dec -69° 13.9′) is an open cluster.
NGC 1936 (RA 05h 22m 14s Dec -67° 58.7′) is a bright nebula and is one of four NGC objects in close proximity, the others being NGC 1929, NGC 1934 and NGC 1935.
NGC 1978 (RA 05h 28m 36s Dec -66° 14.0′) is an open cluster.
NGC 2002 (RA 05h 30m 17s Dec -66° 53.1′) is an open cluster.
NGC 2014 (RA 05h 44m 12.7s Dec −67° 42′ 57″) is a red emission nebula.
NGC 2020 (RA 05h 44m 12.7s Dec −67° 42′ 57″) is an HII region surrounding a Wolf–Rayet star.
NGC 2027 (RA 05h 35m 00s Dec -66° 55.0′) is an open cluster.
NGC 2032 (RA 05h 35m 21s Dec -67° 34.1′; also known as "Seagull Nebula") is a nebula complex that contains four NGC designations: NGC 2029, NGC 2032, NGC 2035 and NGC 2040.
NGC 2074 (RA 05h 39m 03.0s Dec −69° 29′ 54″) is an emission nebula.
NGC 2078 (RA 05h 39m 54s Dec −69° 44′ 54″) is an emission nebula.
NGC 2080, also called the "Ghost Head Nebula", is an emission nebula that is 50 light-years wide in the Large Magellanic Cloud. It is named for the two distinct white patches that it possesses, which are regions of recent star formation. The western portion is colored green from doubly ionized oxygen, the southern portion is red from hydrogen alpha emissions, and the center region is colored yellow from both oxygen and hydrogen emissions. The western white patch, A1, has one massive, recently formed star inside. The eastern patch, A2, has several stars hidden in its dust.
Tarantula Nebula is in the Large Magellanic Cloud, named for its spiderlike shape. It is also designated 30 Doradus, as it is visible to the naked eye as a slightly out-of-focus star. Larger than any nebula in the Milky Way at 1,000 light-years in diameter, it is also brighter, because it is illuminated by the open star cluster NGC 2070, which has at its center the star cluster R136. The illuminating stars are supergiants.
NGC 2164 (RA 05h 58m 53s Dec -68° 30.9′) is a globular cluster.
N44 is a superbubble in the Large Magellanic Cloud that is 1,000 light-years wide. Its overall structure is shaped by the 40 hot stars towards its center. Within the superbubble of N44 is a smaller bubble catalogued as N44F. It is approximately 35 light-years in diameter and is shaped by an incredibly hot star at its center, which has a stellar wind speed of 7 million kilometers per hour. N44F also features dust columns with probable star formation hidden inside.
Equivalents
In Chinese astronomy, the stars of Dorado are in two of Xu Guangqi's Southern Asterisms (近南極星區, Jìnnánjíxīngōu): the White Patches Attached (夾白, Jiābái) and the Goldfish (金魚, Jīnyú).
Namesakes
Dorado (SS-248) and Dorado (SS-526), two United States Navy submarines, were named after the same sea creature as the constellation.
Gallery
See also
Dorado in Chinese astronomy
IAU-recognized constellations
References
Notes
The above deep sky objects appear in Norton's Star Atlas, 1973 edition.
Co-ordinates are obtained from Uranometria Chart Index and Skyview.
Images of the deep sky objects described herein may be viewed at Skyview.
Citations
Sources
External links
The Deep Photographic Guide to the Constellations: Dorado
The clickable Dorado
Peoria Astronomical Society - Dorado
Star Tales – Dorado
Southern constellations
Constellations listed by Petrus Plancius | Dorado | [
"Astronomy"
] | 1,965 | [
"Constellations listed by Petrus Plancius",
"Dorado",
"Southern constellations",
"Constellations"
] |
8,468 | https://en.wikipedia.org/wiki/Determinant | In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix is commonly denoted , , or . Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries.
The determinant of a 2 × 2 matrix is
$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,$
and the determinant of a 3 × 3 matrix is
$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.$
The determinant of an n × n matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of n! (the factorial of n) signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form.
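For readers who want to check a determinant numerically, the following minimal sketch (the use of NumPy and the example matrix are assumptions, not from the source) evaluates a 3 × 3 determinant; NumPy factorizes the matrix rather than summing the n! Leibniz terms.

```python
import numpy as np

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 0.0]])

# numpy uses an LU factorization (Gaussian elimination) internally,
# which is far cheaper than the n!-term Leibniz sum for large matrices.
print(np.linalg.det(A))  # approximately -59.0
```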
Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the n × n matrices that has the four following properties:
The determinant of the identity matrix is 1.
The exchange of two rows multiplies the determinant by −1.
Multiplying a row by a number multiplies the determinant by this number.
Adding a multiple of one row to another row does not change the determinant.
The above properties relating to rows (properties 2–4) may be replaced by the corresponding statements with respect to columns.
The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the determinant of a linear endomorphism, which does not depend on the choice of a coordinate system.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the n-dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
Two by two matrices
The determinant of a 2 × 2 matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is denoted either by "det" or by vertical bars around the matrix, and is defined as
$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$
For example, $\begin{vmatrix} 3 & 7 \\ 1 & -4 \end{vmatrix} = 3 \cdot (-4) - 7 \cdot 1 = -19.$
First properties
The determinant has several key properties that can be proved by direct evaluation of the definition for -matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix is 1.
Second, the determinant is zero if two rows are the same:
$\begin{vmatrix} a & b \\ a & b \end{vmatrix} = ab - ba = 0.$
This holds similarly if the two columns are the same. Moreover, the determinant is additive in each row (and, likewise, in each column):
$\begin{vmatrix} a & b \\ c + c' & d + d' \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b \\ c' & d' \end{vmatrix}.$
Finally, if any column is multiplied by some number (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:
$\begin{vmatrix} r \cdot a & b \\ r \cdot c & d \end{vmatrix} = r \cdot \begin{vmatrix} a & b \\ c & d \end{vmatrix}.$
Geometric meaning
If the matrix entries are real numbers, the matrix can be used to represent two linear maps: one that maps the standard basis vectors to the rows of the matrix, and one that maps them to the columns of the matrix. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d), as shown in the accompanying diagram.
The absolute value of is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by . (The parallelogram formed by the columns of is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
The absolute value of the determinant together with the sign becomes the signed area of the parallelogram. The signed area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).
To show that $ad - bc$ is the signed area, one may consider a matrix containing two vectors $\boldsymbol{u} = (a, b)$ and $\boldsymbol{v} = (c, d)$ representing the parallelogram's sides. The signed area can be expressed as $|\boldsymbol{u}|\,|\boldsymbol{v}|\sin\theta$ for the angle θ between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. $\boldsymbol{u}^{\perp} = (-b, a)$, so that $|\boldsymbol{u}^{\perp}|\,|\boldsymbol{v}|\cos\theta'$ becomes the signed area in question, which can be determined by the pattern of the scalar product to be equal to $ad - bc$:
$\text{Signed area} = |\boldsymbol{u}|\,|\boldsymbol{v}|\sin\theta = |\boldsymbol{u}^{\perp}|\,|\boldsymbol{v}|\cos\theta' = (-b, a) \cdot (c, d) = ad - bc.$
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.
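A quick numerical illustration of this geometric reading (a sketch assuming NumPy; the two example vectors are arbitrary) compares the 2 × 2 determinant with the z-component of the cross product of the row vectors, which is the same signed area.

```python
import numpy as np

# Rows of the matrix are the two vectors spanning the parallelogram.
u = np.array([3.0, 1.0])
v = np.array([1.0, 2.0])
A = np.array([u, v])

signed_area = np.linalg.det(A)        # ad - bc = 3*2 - 1*1 = 5
cross_z = u[0] * v[1] - u[1] * v[0]   # z-component of the 3D cross product

print(signed_area, cross_z)  # both 5.0: positive area, orientation preserved
```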
The object known as the bivector is related to these ideas. In 2D, it can be interpreted as an oriented plane segment formed by imagining two vectors each with origin (0, 0), and coordinates (a, b) and (c, d). The bivector magnitude (denoted by $(a, b) \wedge (c, d)$) is the signed area, which is also the determinant $ad - bc$.
If an n × n real matrix A is written in terms of its column vectors $A = \begin{pmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n \end{pmatrix}$, then
$A \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 \mathbf{a}_1 + \cdots + x_n \mathbf{a}_n.$
This means that $A$ maps the unit n-cube to the n-dimensional parallelotope defined by the vectors $\mathbf{a}_1, \ldots, \mathbf{a}_n$, the region $P = \{ c_1 \mathbf{a}_1 + \cdots + c_n \mathbf{a}_n \mid 0 \le c_i \le 1 \}.$
The determinant gives the signed n-dimensional volume of this parallelotope, and hence describes more generally the n-dimensional volume scaling factor of the linear transformation produced by A. (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully n-dimensional, which indicates that the dimension of the image of A is less than n. This means that A produces a linear transformation which is neither onto nor one-to-one, and so is not invertible.
Definition
Let A be a square matrix with n rows and n columns, so that it can be written as
$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}.$
The entries $a_{1,1}$, $a_{1,2}$, etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.
The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
$\det(A) = \begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.$
There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
Leibniz formula
3 × 3 matrices
The Leibniz formula for the determinant of a 3 × 3 matrix is the following:
$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.$
In this expression, each term has one factor from each row, all in different columns, arranged in increasing row order. For example, bdi has b from the first row second column, d from the second row first column, and i from the third row third column. The signs are determined by how many transpositions of factors are necessary to arrange the factors in increasing order of their columns (given that the terms are arranged left-to-right in increasing row order): positive for an even number of transpositions and negative for an odd number. For the example of bdi, the single transposition of bd to db gives dbi, whose three factors are from the first, second and third columns respectively; this is an odd number of transpositions, so the term appears with negative sign.
The rule of Sarrus is a mnemonic for the expanded form of this determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a matrix does not carry over into higher dimensions.
n × n matrices
Generalizing the above to higher dimensions, the determinant of an n × n matrix is an expression involving permutations and their signatures. A permutation of the set $\{1, 2, \ldots, n\}$ is a bijective function from this set to itself, with values exhausting the entire set. The set of all such permutations, called the symmetric group, is commonly denoted $S_n$. The signature $\operatorname{sgn}(\sigma)$ of a permutation $\sigma$ is $+1$ if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is $-1$.
Given an n × n matrix
$A = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{pmatrix},$
the Leibniz formula for its determinant is, using sigma notation for the sum,
$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1,\sigma(1)}\, a_{2,\sigma(2)} \cdots a_{n,\sigma(n)}.$
Using pi notation for the product, this can be shortened into
$\det(A) = \sum_{\sigma \in S_n} \left( \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)} \right).$
The Levi-Civita symbol $\varepsilon_{i_1 \cdots i_n}$ is defined on the n-tuples of integers in $\{1, \ldots, n\}$ as 0 if two of the integers are equal, and otherwise as the signature of the permutation defined by the n-tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes
$\det(A) = \sum_{i_1, i_2, \ldots, i_n} \varepsilon_{i_1 \cdots i_n}\, a_{1,i_1} \cdots a_{n,i_n},$
where the sum is taken over all n-tuples of integers in $\{1, \ldots, n\}.$
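A direct, if inefficient, transcription of the Leibniz formula is sketched below (illustrative Python; the test matrix and the NumPy cross-check are assumptions made for the example). It sums the n! signed products over all permutations, with the signature computed by counting inversions.

```python
import itertools
import numpy as np

def sign(perm):
    """Signature of a permutation given as a tuple of 0-based indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_leibniz(A):
    """Sum of n! signed products a_{1,sigma(1)} ... a_{n,sigma(n)}."""
    n = len(A)
    total = 0
    for perm in itertools.permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

A = [[2, 1, 3], [0, 4, 1], [5, 2, 0]]
print(det_leibniz(A), np.linalg.det(np.array(A, dtype=float)))  # -59 and -59.0
```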
Properties
Characterization of the determinant
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an n × n matrix A as being composed of its n columns, so denoted as
$A = \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{pmatrix},$
where the column vector $\mathbf{a}_i$ (for each i) is composed of the entries of the matrix in the i-th column.
$\det(I) = 1$, where $I$ is an identity matrix.
The determinant is multilinear: if the jth column of a matrix $A$ is written as a linear combination $\mathbf{a}_j = r \cdot \mathbf{v} + \mathbf{w}$ of two column vectors v and w and a number r, then the determinant of A is expressible as a similar linear combination:
$\det(A) = \det\begin{pmatrix} \mathbf{a}_1 & \cdots & r \cdot \mathbf{v} + \mathbf{w} & \cdots & \mathbf{a}_n \end{pmatrix} = r \cdot \det\begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{v} & \cdots & \mathbf{a}_n \end{pmatrix} + \det\begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{w} & \cdots & \mathbf{a}_n \end{pmatrix}.$
The determinant is alternating: whenever two columns of a matrix are identical, its determinant is 0:
$\det\begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{v} & \cdots & \mathbf{v} & \cdots & \mathbf{a}_n \end{pmatrix} = 0.$
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any -matrix A a number that satisfies these three properties. This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.
To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (when two columns are equal, by the alternating property) or else ±1 (by the identity property and column exchanges, discussed below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
Immediate consequences
These rules have several further consequences:
The determinant is a homogeneous function, i.e., $\det(cA) = c^n \det(A)$ (for an n × n matrix $A$ and a scalar $c$).
Interchanging any pair of columns of a matrix multiplies its determinant by −1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above). This rule can be applied iteratively when several columns are swapped; more generally, any permutation of the columns multiplies the determinant by the sign of the permutation.
If some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), the determinant is 0. As a special case, this includes: if some column is such that all its entries are zero, then the determinant of that matrix is 0.
Adding a scalar multiple of one column to another column does not change the value of the determinant. This is a consequence of multilinearity and of being alternating: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, which determinant is 0, since the determinant is alternating.
If $A$ is a triangular matrix, i.e. $a_{i,j} = 0$ whenever $i > j$ or, alternatively, whenever $i < j$, then its determinant equals the product of the diagonal entries: $\det(A) = a_{1,1} a_{2,2} \cdots a_{n,n}.$ Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a diagonal matrix (without changing the determinant). For such a matrix, using the linearity in each column reduces the determinant to the product of the diagonal entries times the determinant of the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation which gives a non-zero contribution is the identity permutation.
Example
These characterizing properties and their consequences listed above are both theoretically significant, but can also be used to compute determinants of concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way: exchanging two rows flips the sign, multiplying a row by a scalar multiplies the determinant by that scalar, and adding a multiple of one row to another leaves the determinant unchanged.
Combining the equalities produced by these elementary steps with the fact that the determinant of the resulting triangular matrix is the product of its diagonal entries gives the determinant of the original matrix; a short sketch of this procedure follows.
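The sketch below is illustrative Python (NumPy, partial pivoting, and the example matrix are assumptions, not prescribed by the text); it performs the elimination while tracking the sign changes from row swaps.

```python
import numpy as np

def det_gauss(A):
    """Determinant via Gaussian elimination: row swaps flip the sign,
    adding multiples of rows changes nothing, and the result is the
    signed product of the diagonal of the triangular form."""
    A = np.array(A, dtype=float)
    n = len(A)
    sign = 1.0
    for k in range(n):
        pivot = np.argmax(np.abs(A[k:, k])) + k   # partial pivoting
        if A[pivot, k] == 0:
            return 0.0                            # singular matrix
        if pivot != k:
            A[[k, pivot]] = A[[pivot, k]]         # swap rows, flip sign
            sign = -sign
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return sign * np.prod(np.diag(A))

print(det_gauss([[2, 1, 3], [0, 4, 1], [5, 2, 0]]))  # -59.0
```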
Transpose
The determinant of the transpose of $A$ equals the determinant of A:
$\det\left(A^{\mathsf{T}}\right) = \det(A).$
This can be proven by inspecting the Leibniz formula. This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function.
Multiplicativity and matrix groups
The determinant is a multiplicative map, i.e., for square matrices $A$ and $B$ of equal size, the determinant of a matrix product equals the product of their determinants:
$\det(AB) = \det(A) \det(B).$
This key fact can be proven by observing that, for a fixed matrix , both sides of the equation are alternating and multilinear as a function depending on the columns of . Moreover, they both take the value when is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.
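A numerical spot-check of multiplicativity (a sketch assuming NumPy; the random matrices and seed are arbitrary) can be written as follows.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) should equal det(A) * det(B) up to floating-point rounding.
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```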
A matrix with entries in a field is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
$\det\left(A^{-1}\right) = \frac{1}{\det(A)} = \det(A)^{-1}.$
In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size n over a field $K$) forms a group known as the general linear group $\mathrm{GL}_n(K)$ (respectively, a subgroup called the special linear group $\mathrm{SL}_n(K)$). More generally, the word "special" indicates the subgroup of another matrix group of matrices of determinant one. Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.
Because the determinant respects multiplication and inverses, it is in fact a group homomorphism from $\mathrm{GL}_n(K)$ into the multiplicative group $K^{\times}$ of nonzero elements of $K$. This homomorphism is surjective and its kernel is $\mathrm{SL}_n(K)$ (the matrices with determinant one). Hence, by the first isomorphism theorem, this shows that $\mathrm{SL}_n(K)$ is a normal subgroup of $\mathrm{GL}_n(K)$, and that the quotient group $\mathrm{GL}_n(K) / \mathrm{SL}_n(K)$ is isomorphic to $K^{\times}$.
The Cauchy–Binet formula is a generalization of that product formula for rectangular matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all quadratic submatrices of a given matrix.
Laplace expansion
Laplace expansion expresses the determinant of a matrix recursively in terms of determinants of smaller matrices, known as its minors. The minor $M_{i,j}$ is defined to be the determinant of the $(n-1) \times (n-1)$ matrix that results from $A$ by removing the $i$-th row and the $j$-th column. The expression $(-1)^{i+j} M_{i,j}$ is known as a cofactor. For every $i$, one has the equality
$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j},$
which is called the Laplace expansion along the ith row. For example, the Laplace expansion along the first row ($i = 1$) gives the following formula:
$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}.$
Unwinding the determinants of these $2 \times 2$ matrices gives back the Leibniz formula mentioned above. Similarly, the Laplace expansion along the $j$-th column is the equality
$\det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j}.$
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the Vandermonde matrix, whose determinant is $\prod_{1 \le i < j \le n} (x_j - x_i)$.
The n-term Laplace expansion along a row or column can be generalized to write an n × n determinant as a sum of $\binom{n}{k}$ signed terms, each the product of the determinant of a k × k submatrix and the determinant of the complementary (n−k) × (n−k) submatrix.
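A minimal recursive implementation of cofactor expansion along the first row is sketched below (illustrative Python; the example matrix is an assumption). Its cost grows factorially, so it is only practical for small matrices.

```python
def det_laplace(A):
    """Recursive Laplace (cofactor) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1,j+1}: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[2, 1, 3], [0, 4, 1], [5, 2, 0]]))  # -59
```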
Adjugate matrix
The adjugate matrix $\operatorname{adj}(A)$ is the transpose of the matrix of the cofactors, that is,
$(\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{j,i}.$
For every matrix, one has
$(\det A)\, I = A \operatorname{adj}(A) = \operatorname{adj}(A)\, A.$
Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix:
$A^{-1} = \frac{1}{\det A} \operatorname{adj}(A).$
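The adjugate can be built entry by entry from cofactors; the sketch below (assuming NumPy and an arbitrary example matrix) does so and then verifies the identity A·adj(A) = det(A)·I numerically.

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix: adj(A)[i, j] = (-1)^(i+j) * M_{j, i}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_{j, i}: delete row j and column i.
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [5.0, 2.0, 0.0]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))  # True
```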
Block matrices
The formula for the determinant of a 2 × 2 matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices $A$, $B$, $C$, $D$ of dimension $m \times m$, $m \times n$, $n \times m$ and $n \times n$, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is
$\det\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det(A) \det(D) = \det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}.$
If $A$ is invertible, then it follows with results from the section on multiplicativity that
$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A) \det\left(D - C A^{-1} B\right),$
which simplifies to $\det(A) \left(D - C A^{-1} B\right)$ when $D$ is a 1 × 1 matrix.
A similar result holds when $D$ is invertible, namely
$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D) \det\left(A - B D^{-1} C\right).$
Both results can be combined to derive Sylvester's determinant theorem, which is also stated below.
If the blocks are square matrices of the same size further formulas hold. For example, if $C$ and $D$ commute (i.e., $CD = DC$), then
$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC).$
This formula has been generalized to matrices composed of more than 2 × 2 blocks, again under appropriate commutativity conditions among the individual blocks.
For $A = D$ and $B = C$, the following formula holds (even if $A$ and $B$ do not commute):
$\det\begin{pmatrix} A & B \\ B & A \end{pmatrix} = \det(A - B) \det(A + B).$
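A quick numerical spot-check of the simplest block formula, the block-triangular case, is sketched below (illustrative Python; NumPy, the random blocks, and the seed are assumptions for the example).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
D = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 3))

# Block lower-triangular matrix [[A, 0], [C, D]].
M = np.block([[A, np.zeros((3, 2))], [C, D]])

print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D)))  # True
```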
Sylvester's determinant theorem
Sylvester's determinant theorem states that for A, an m × n matrix, and B, an n × m matrix (so that A and B have dimensions allowing them to be multiplied in either order forming a square matrix):
$\det\left(I_m + AB\right) = \det\left(I_n + BA\right),$
where Im and In are the m × m and n × n identity matrices, respectively.
From this general result several consequences follow.
A generalization is $\det(Z + AWB) = \det(Z) \det(W) \det\left(W^{-1} + B Z^{-1} A\right)$ (see the matrix determinant lemma), where Z is an m × m invertible matrix and W is an n × n invertible matrix.
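Sylvester's identity can be checked numerically with rectangular factors; the sketch below (assuming NumPy, with arbitrary random matrices) compares the m × m and n × n determinants.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
print(np.isclose(lhs, rhs))  # True: the 4x4 and 2x2 determinants agree
```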
Sum
The determinant of the sum of two square matrices of the same size is not in general expressible in terms of the determinants of A and of B.
However, for positive semidefinite matrices $A$, $B$ and $C$ of equal size,
$\det(A + B + C) + \det(C) \ge \det(A + C) + \det(B + C),$
with the corollary
$\det(A + B) \ge \det(A) + \det(B).$
Brunn–Minkowski theorem implies that the nth root of the determinant is a concave function, when restricted to Hermitian positive-definite n × n matrices. Therefore, if $A$ and $B$ are Hermitian positive-definite n × n matrices, one has
$\sqrt[n]{\det(A + B)} \ge \sqrt[n]{\det(A)} + \sqrt[n]{\det(B)},$
since the nth root of the determinant is a homogeneous function.
Sum identity for 2×2 matrices
For the special case of 2 × 2 matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity:
$\det(A + B) = \det(A) + \det(B) + \operatorname{tr}(A)\operatorname{tr}(B) - \operatorname{tr}(AB).$
Properties of the determinant in relation to other notions
Eigenvalues and characteristic polynomial
The determinant is closely related to two other central concepts in linear algebra, the eigenvalues and the characteristic polynomial of a matrix. Let $A$ be an n × n matrix with complex entries. Then, by the Fundamental Theorem of Algebra, $A$ must have exactly n eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. (Here it is understood that an eigenvalue with algebraic multiplicity $\mu$ occurs $\mu$ times in this list.) Then, it turns out the determinant of $A$ is equal to the product of these eigenvalues,
$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n.$
The product of all non-zero eigenvalues is referred to as pseudo-determinant.
From this, one immediately sees that the determinant of a matrix $A$ is zero if and only if 0 is an eigenvalue of $A$. In other words, $A$ is invertible if and only if 0 is not an eigenvalue of $A$.
The characteristic polynomial is defined as
$\chi_A(t) = \det(t \cdot I - A).$
Here, $t$ is the indeterminate of the polynomial and $I$ is the identity matrix of the same size as $A$. By means of this polynomial, determinants can be used to find the eigenvalues of the matrix $A$: they are precisely the roots of this polynomial, i.e., those complex numbers $\lambda$ such that
$\chi_A(\lambda) = 0.$
A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices
$A_k := \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,k} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k,1} & a_{k,2} & \cdots & a_{k,k} \end{pmatrix}$
being positive, for all $k$ between 1 and $n$.
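Both facts from this section can be checked on a small example; the sketch below (assuming NumPy, with an arbitrary symmetric matrix) compares det(A) with the product of the eigenvalues and evaluates the leading principal minors used by Sylvester's criterion.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 2.0],
              [0.0, 2.0, 5.0]])

# det(A) equals the product of the eigenvalues.
eigvals = np.linalg.eigvals(A)
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))  # True

# Sylvester's criterion: all leading principal minors of a Hermitian matrix
# are positive exactly when the matrix is positive definite.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
print(minors, all(m > 0 for m in minors))
```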
Trace
The trace tr(A) is by definition the sum of the diagonal entries of $A$ and also equals the sum of the eigenvalues. Thus, for complex matrices $A$,
$\det(\exp(A)) = \exp(\operatorname{tr}(A)),$
or, for real matrices $A$,
$\operatorname{tr}(A) = \log(\det(\exp(A))).$
Here exp(A) denotes the matrix exponential of $A$, because every eigenvalue $\lambda$ of $A$ corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of $A$, that is, any matrix $L$ satisfying
$\exp(L) = A,$
the determinant of $A$ is given by
$\det(A) = \exp(\operatorname{tr}(L)).$
For example, for n = 2, n = 3, and n = 4, respectively,
$\det(A) = \tfrac{1}{2}\left((\operatorname{tr} A)^2 - \operatorname{tr}(A^2)\right),$
$\det(A) = \tfrac{1}{6}\left((\operatorname{tr} A)^3 - 3\operatorname{tr} A \operatorname{tr}(A^2) + 2\operatorname{tr}(A^3)\right),$
$\det(A) = \tfrac{1}{24}\left((\operatorname{tr} A)^4 - 6(\operatorname{tr} A)^2 \operatorname{tr}(A^2) + 3\left(\operatorname{tr}(A^2)\right)^2 + 8\operatorname{tr} A \operatorname{tr}(A^3) - 6\operatorname{tr}(A^4)\right),$
cf. Cayley-Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic , the signed constant term of the characteristic polynomial, determined recursively from
In the general case, this may also be obtained from
where the sum is taken over the set of all integers satisfying the equation
The formula can be expressed in terms of the complete exponential Bell polynomial of n arguments s_l = −(l − 1)! tr(A^l) as
This formula can also be used to find the determinant of a matrix with multidimensional indices and . The product and trace of such matrices are defined in a natural way as
An important arbitrary dimension identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of A is less than 1 in absolute value,
where is the identity matrix. More generally, if
is expanded as a formal power series in then all coefficients of for are zero and the remaining polynomial is .
Upper and lower bounds
For a positive definite matrix A, the trace operator gives the following tight lower and upper bounds on the log determinant:
tr(I − A⁻¹) ≤ log det(A) ≤ tr(A − I),
with equality if and only if A = I. This relationship can be derived via the formula for the Kullback–Leibler divergence between two multivariate normal distributions.
Also,
n / tr(A⁻¹) ≤ det(A)^(1/n) ≤ tr(A)/n ≤ sqrt(tr(A²)/n).
These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square.
Derivative
The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a polynomial function from Rⁿˣⁿ to R. In particular, it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:
d det(A) = tr(adj(A) dA),
where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have
d det(A) = det(A) tr(A⁻¹ dA).
Expressed in terms of the entries of A, these are
∂ det(A) / ∂A_{i,j} = adj(A)_{j,i} = det(A) (A⁻¹)_{j,i}.
Yet another equivalent formulation is
det(A + εX) − det(A) = tr(adj(A) X) ε + O(ε²),
using big O notation. The special case where A = I, the identity matrix, yields
det(I + εX) = 1 + tr(X) ε + O(ε²).
This identity is used in describing Lie algebras associated to certain matrix Lie groups. For example, the special linear group is defined by the equation det(A) = 1. The above formula shows that its Lie algebra is the special linear Lie algebra consisting of those matrices having trace zero.
Writing a 3 × 3 matrix as A = [a b c], where a, b, c are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two:
∇_a det(A) = b × c,  ∇_b det(A) = c × a,  ∇_c det(A) = a × b.
History
Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations.
The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero).
In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity.
Determinants proper originated separately from the work of Seki Takakazu in 1683 in Japan and, in parallel, of Leibniz in 1693. Cramer (1750) stated, without proof, Cramer's rule. Both Cramer and also Bézout (1779) were led to determinants by the question of plane curves passing through a given set of points.
Vandermonde (1771) first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied them to questions of elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word "determinant" in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.
Jacobi used the functional determinant which Sylvester later called the Jacobian. In his memoirs in Crelle's Journal for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work. Cayley (1841) introduced the modern notation for the determinant using vertical bars.
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
Applications
Cramer's rule
Determinants can be used to describe the solutions of a linear system of equations, written in matrix form as Ax = b. This equation has a unique solution x if and only if det(A) is nonzero. In this case, the solution is given by Cramer's rule:
x_i = det(A_i) / det(A)  (i = 1, 2, ..., n),
where A_i is the matrix formed by replacing the i-th column of A by the column vector b. This follows immediately by column expansion of the determinant, i.e.
det(A_i) = det(a_1, ..., b, ..., a_n) = Σ_{j=1}^{n} x_j det(a_1, ..., a_{i−1}, a_j, a_{i+1}, ..., a_n) = x_i det(A),
where the vectors a_j are the columns of A. The rule is also implied by the identity
A adj(A) = adj(A) A = det(A) I_n.
Cramer's rule can be implemented in O(n³) time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
Linear independence
Determinants can be used to characterize linearly dependent vectors: det(A) is zero if and only if the column vectors (or, equivalently, the row vectors) of the matrix A are linearly dependent. For example, given two linearly independent vectors v₁, v₂ in R³, a third vector v₃ lies in the plane spanned by the former two vectors exactly if the determinant of the 3 × 3 matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given n functions f₁(x), ..., fₙ(x) (supposed to be n − 1 times differentiable), the Wronskian is defined to be the determinant of the n × n matrix whose (i, j) entry is the (i − 1)-th derivative of f_j:
W(f₁, ..., fₙ)(x) = det( f_j^{(i−1)}(x) )_{1 ≤ i, j ≤ n}.
It is non-zero (for some x) in a specified interval if and only if the given functions and all their derivatives up to order n − 1 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. Another such use of the determinant is the resultant, which gives a criterion when two polynomials have a common root.
Orientation of a basis
The determinant can be thought of as assigning a number to every sequence of n vectors in Rn, by using the square matrix whose columns are the given vectors. The determinant will be nonzero if and only if the sequence of vectors is a basis for Rn. In that case, the sign of the determinant determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. In the case of an orthogonal basis, the magnitude of the determinant is equal to the product of the lengths of the basis vectors. For instance, an orthogonal matrix with entries in Rn represents an orthonormal basis in Euclidean space, and hence has determinant of ±1 (since all the vectors have length 1). The determinant is +1 if and only if the basis has the same orientation. It is −1 if and only if the basis has the opposite orientation.
More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
Volume and Jacobian determinant
As pointed out above, the absolute value of the determinant of n real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if f : Rⁿ → Rⁿ is the linear map given by multiplication with a matrix A, and S is any measurable subset of Rⁿ, then the volume of f(S) is given by |det(A)| times the volume of S. More generally, if the linear map f : Rⁿ → Rᵐ is represented by the m × n matrix A, then the n-dimensional volume of f(S) is given by:
volume(f(S)) = sqrt(det(AᵀA)) · volume(S).
By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines. The volume of any tetrahedron, given its vertices a, b, c, d, is (1/6)·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that form a spanning tree over the vertices.
For a general differentiable function, much of the above carries over by considering the Jacobian matrix of f. For
the Jacobian matrix is the matrix whose entries are given by the partial derivatives
Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions f and an open subset U of Rⁿ (the domain of f), the integral over f(U) of some other function φ : Rⁿ → R is given by
∫_{f(U)} φ(v) dv = ∫_U φ(f(u)) |det(Df)(u)| du.
The Jacobian also occurs in the inverse function theorem.
When applied to the field of Cartography, the determinant can be used to measure the rate of expansion of a map near the poles.
Abstract algebraic aspects
Determinant of an endomorphism
The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices A and B are similar if there exists an invertible matrix X such that A = X⁻¹BX. Indeed, repeatedly applying the above identities yields
det(A) = det(X)⁻¹ det(B) det(X) = det(B).
The determinant is therefore also called a similarity invariant. The determinant of a linear transformation
for some finite-dimensional vector space V is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in V. By the similarity invariance, this determinant is independent of the choice of the basis for V and therefore only depends on the endomorphism T.
Square matrices over commutative rings
The above definition of the determinant using the Leibniz rule holds more generally when the entries of the matrix are elements of a commutative ring R, such as the integers Z, as opposed to the field of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies det(I) = 1 still holds, as do all the properties that result from that characterization.
A matrix A is invertible (in the sense that there is an inverse matrix whose entries are in R) if and only if its determinant is an invertible element in R. For R = Z, this means that the determinant is +1 or −1. Such a matrix is called unimodular.
The determinant being multiplicative, it defines a group homomorphism
det : GL_n(R) → R^×
between the general linear group (the group of invertible n × n matrices with entries in R) and the multiplicative group of units in R. Since it respects the multiplication in both groups, this map is a group homomorphism.
Given a ring homomorphism f : R → S, there is a map GL_n(R) → GL_n(S) given by replacing all entries in R by their images under f. The determinant respects these maps, i.e., the identity
f(det((a_{i,j}))) = det((f(a_{i,j})))
holds. In other words, the displayed commutative diagram commutes.
For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors GL_n and (−)^× (the group of units). Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group, det : GL_n → G_m.
Exterior algebra
The determinant of a linear transformation T : V → V of an n-dimensional vector space V or, more generally a free module of (finite) rank n over a commutative ring R, can be formulated in a coordinate-free manner by considering the n-th exterior power Λⁿ V of V. The map T induces a linear map
Λⁿ T : Λⁿ V → Λⁿ V,  v₁ ∧ v₂ ∧ ⋯ ∧ vₙ ↦ T(v₁) ∧ T(v₂) ∧ ⋯ ∧ T(vₙ).
As Λⁿ V is one-dimensional, the map Λⁿ T is given by multiplying with some scalar, i.e., an element in R. Some authors use this fact to define the determinant to be the element in R satisfying the following identity (for all v_i):
(Λⁿ T)(v₁ ∧ ⋯ ∧ vₙ) = det(T) · v₁ ∧ ⋯ ∧ vₙ.
This definition agrees with the more concrete coordinate-dependent definition. This can be shown using the uniqueness of a multilinear alternating form on n-tuples of vectors in Rⁿ.
For this reason, the highest non-zero exterior power Λⁿ(V) (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of V, and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms Λᵏ V with k < n.
Generalizations and related notions
Determinants as treated above admit several variants: the permanent of a matrix is defined as the determinant, except that the factors sgn(σ) occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group S_n in Leibniz's rule.
Determinants for finite-dimensional algebras
For any associative algebra A that is finite-dimensional as a vector space over a field F, there is a determinant map
det : A → F.
This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest order term of this polynomial. This general definition recovers the determinant for the matrix algebra Mat_n(F), but also includes several further cases. The determinant of a quaternion,
det(a + bi + cj + dk) = a² + b² + c² + d²,
the norm of a field extension, the Pfaffian of a skew-symmetric matrix, and the reduced norm of a central simple algebra all arise as special cases of this construction.
Infinite matrices
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.
The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula
det(I + A) = exp(tr(log(I + A))).
Another infinite-dimensional notion of determinant is the functional determinant.
Operators in von Neumann algebras
For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede−Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede−Kadison determinant.
Related notions for non-commutative rings
For matrices over non-commutative rings, multilinearity and alternating properties are incompatible for n ≥ 2, so there is no good definition of the determinant in this setting.
For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero with a regular element of R as value on some pair of arguments implies that R is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the q-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of -graded rings). Manin matrices form the class closest to matrices with commutative elements.
Calculation
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques. Computational geometry, however, does frequently use calculations related to determinants.
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating n! (n factorial) products for an n × n matrix. Thus, the number of required operations grows very quickly: it is of order n!. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
Gaussian elimination
Gaussian elimination consists of left multiplying a matrix by elementary matrices in order to obtain a matrix in row echelon form. One can restrict the computation to elementary matrices of determinant 1. In this case, the determinant of the resulting row echelon form equals the determinant of the initial matrix. As a row echelon form is a triangular matrix, its determinant is the product of the entries of its diagonal.
So, the determinant can be computed almost for free from the result of a Gaussian elimination.
Decomposition methods
Some methods compute det(A) by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order O(n³), which is a significant improvement over O(n!).
For example, LU decomposition expresses A as a product
A = PLU
of a permutation matrix P (which has exactly a single 1 in each column, and otherwise zeros), a lower triangular matrix L and an upper triangular matrix U.
The determinants of the two triangular matrices L and U can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of P is just the sign of the corresponding permutation (which is +1 for an even permutation and −1 for an odd permutation). Once such an LU decomposition is known for A, its determinant is readily computed as
det(A) = det(P) · det(L) · det(U).
Further methods
The order n³ reached by decomposition methods has been improved by different methods. If two matrices of order n can be multiplied in time M(n), where M(n) ≥ nᵃ for some a > 2, then there is an algorithm computing the determinant in time O(M(n)). This means, for example, that an O(n^2.376) algorithm for computing the determinant exists based on the Coppersmith–Winograd algorithm. This exponent has been further lowered, as of 2016, to 2.373.
In addition to the complexity of the algorithm, further criteria can be used to compare algorithms.
Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gauss elimination requires divisions.) One such algorithm, having complexity O(n⁴), is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called closed ordered walks, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule. Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order O(n³), but the bit length of intermediate values can become exponentially long. By comparison, the Bareiss algorithm, an exact-division method (so it does use division, but only in cases where these divisions can be performed without remainder), is of the same order, but the bit complexity is roughly the bit size of the original entries in the matrix times n.
If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of A + uvᵀ, where u and v are column vectors.
Charles Dodgson (i.e. Lewis Carroll of Alice's Adventures in Wonderland fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form.
See also
Cauchy determinant
Cayley–Menger determinant
Dieudonné determinant
Slater determinant
Determinantal conjecture
Notes
References
G. Baley Price (1947) "Some identities in the theory of determinants", American Mathematical Monthly 54:75–90
Historical references
Robert Forsyth Scott (1880): A Treatise on the Theory of Determinants and Their Applications in Analysis and Geometry, Cambridge University Press
E. R. Hedrick: On Three Dimensional Determinants, Annals of Mathematics, Vol. 1, No. 1/4 (1899–1900), pp. 49–67. https://doi.org/10.2307/1967268 (Note: this is not the ordinary determinant.)
External links
Determinant Interactive Program and Tutorial
Linear algebra: determinants. Compute determinants of matrices up to order 6 using Laplace expansion along a row or column you choose.
Determinant Calculator Calculator for matrix determinants, up to the 8th order.
Matrices and Linear Algebra on the Earliest Uses Pages
Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course.
Matrix theory
Linear algebra
Homogeneous polynomials | Determinant | [
"Mathematics"
] | 9,607 | [
"Linear algebra",
"Algebra"
] |
8,471 | https://en.wikipedia.org/wiki/Delphinus | Delphinus is a small constellation in the Northern Celestial Hemisphere, close to the celestial equator. Its name is the Latin version for the Greek word for dolphin (δελφίς). It is one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations recognized by the International Astronomical Union. It is one of the smaller constellations, ranked 69th in size. Delphinus' five brightest stars form a distinctive asterism symbolizing a dolphin with four stars representing the body and one the tail. It is bordered (clockwise from north) by Vulpecula, Sagitta, Aquila, Aquarius, Equuleus and Pegasus.
Delphinus is a faint constellation with only two stars brighter than an apparent magnitude of 4, Beta Delphini (Rotanev) at magnitude 3.6 and Alpha Delphini (Sualocin) at magnitude 3.8.
Mythology
Delphinus is associated with two stories from Greek mythology.
According to the first myth, the Greek god Poseidon wanted to marry Amphitrite, a beautiful nereid. However, wanting to protect her virginity, she fled to the Atlas mountains. Her suitor then sent out several searchers, among them a certain Delphinus. Delphinus accidentally stumbled upon her and was able to persuade Amphitrite to accept Poseidon's wooing. Out of gratitude the god placed the image of a dolphin among the stars.
The second story tells of the Greek poet Arion of Lesbos (7th century BC), who was saved by a dolphin. He was a court musician at the palace of Periander, ruler of Corinth. Arion had amassed a fortune during his travels to Sicily and Italy. On his way home from Tarentum his wealth caused the crew of his ship to conspire against him. Threatened with death, Arion asked to be granted a last wish which the crew granted: he wanted to sing a dirge. This he did, and while doing so, flung himself into the sea. There, he was rescued by a dolphin which had been charmed by Arion's music. The dolphin carried Arion to the coast of Greece and left.
In non-Western astronomy
In Chinese astronomy, the stars of Delphinus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
In Polynesia, two cultures recognized Delphinus as a constellation. In Pukapuka, it was called Te Toloa and in the Tuamotus, it was called Te Uru-o-tiki.
In Hindu astrology, Delphinus corresponds to the Nakshatra, or lunar mansion, of Dhanishta.
Characteristics
Delphinus is bordered by Vulpecula to the north, Sagitta to the northwest, Aquila to the west and southwest, Aquarius to the southeast, Equuleus to the east and Pegasus to the east. Covering 188.5 square degrees, corresponding to 0.457% of the sky, it ranks 69th of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Del". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The whole constellation is visible to observers north of latitude 69°S.
Features
Stars
Delphinus has two stars above fourth (apparent) magnitude; its brightest star is of magnitude 3.6. The main asterism in Delphinus is Job's Coffin, nearly a 45°-apex lozenge or diamond of the four brightest stars: Alpha, Beta, Gamma, and Delta Delphini. Delphinus is in a rich Milky Way star field. Alpha and Beta Delphini have 19th century names Sualocin and Rotanev, read backwards: Nicolaus Venator, the Latinized name of a Palermo Observatory director, Niccolò Cacciatore (d. 1841).
Alpha Delphini is a blue-white hued main sequence star of magnitude 3.8, 241 light-years from Earth. It is a spectroscopic binary. It is officially named Sualocin. The star has an absolute magnitude of -0.4.
Beta Delphini is officially called Rotanev. It was found to be a binary star in 1873. The gap between its close binary stars is visible from large amateur telescopes. To the unaided eye, it appears to be a white star of magnitude 3.6. It has a period of 27 years and is 97 light-years from Earth.
Gamma Delphini is a celebrated binary star among amateur astronomers. The primary is orange-gold of magnitude 4.3; the secondary is a light yellow star of magnitude 5.1. The pair form a true binary with an estimated orbital period of over 3,000 years. 125 light-years away, the two components are visible in a small amateur telescope. The secondary, also described as green, is 10 arcseconds from the primary. Struve 2725, called the "Ghost Double", is a pair that appears similar but dimmer. Its components of magnitudes 7.6 and 8.4 are separated by 6 arcseconds and are 15 arcminutes from Gamma Delphini itself. An unconfirmed exoplanet with a minimum mass of 0.7 Jupiter masses may orbit one of the stars.
Delta Delphini is an A-type star of magnitude 4.43. It is a spectroscopic binary, and both stars are Delta Scuti variables.
Epsilon Delphini, Deneb Dulfim (lit. "tail [of the] Dolphin"), or Aldulfin, is a star of stellar class B6 III. Its magnitude is variable at around 4.03.
Zeta Delphini, an A3Va main-sequence star of magnitude 4.6, was in 2014 discovered to have a brown dwarf orbiting around it. Zeta Delphini B has a mass of 50±15 Jupiter masses.
Rho Aquilae, at magnitude 4.94, is about 150 light-years away. Due to its proper motion, it has been within the boundaries of the constellation since 1992. It is an A-type main sequence star with a lower metallicity than the Sun.
HR Delphini was a nova that brightened to magnitude 3.5 in December 1967. It took an unusually long time for the nova to reach peak brightness, which indicates that it barely satisfied the conditions for a thermonuclear runaway. Another nova by the name V339 Delphini was detected in 2013; it peaked at magnitude 4.3 and was the first nova observed to produce lithium.
Musica, also known by its Flamsteed designation 18 Delphini, is one of the five stars with known planets located in Delphinus. It has a spectral type of G6 III. Arion, the planet, is a very dense and massive planet with a mass at least 10.3 times greater than Jupiter. Arion was part of the first NameExoWorlds contest where the public got the opportunity to suggest names for exoplanets and their host stars.
Exoplanets
In 2024 the planet TOI-6883 b was discovered in the constellation Delphinus. It has a 16.249 day orbital period around its host star, a radius 1.08 times Jupiter's, and a mass 4.34 times Jupiter's. It was discovered from a single transit in TESS data and it was confirmed by a network of citizen scientists.
In 2024, the planet TOI-6883 c was discovered in the constellation Delphinus. It has an orbital period of 7.8458 days, a radius of 0.7 times Jupiter's, and a third of Jupiter's mass. The Neptune-sized planet was discovered from an anomaly in the data retrieved for the TOI-6883 system.
Deep-sky objects
Its rich Milky Way star field means Delphinus hosts many modest deep-sky objects. NGC 6891 is a planetary nebula of magnitude 10.5; another is NGC 6905 or the Blue Flash Nebula. The Blue Flash Nebula shows broad emission lines. The central star in NGC 6905 has a spectral type of WO2, meaning it is rich in oxygen.
NGC 6934 is a globular cluster of magnitude 9.75. It is about 52,000 light-years away from the Solar System. It is in the Shapley-Sawyer Concentration Class VIII and is thought to share a common origin with another globular cluster in Boötes. It has an intermediate metallicity for a globular cluster, but as of 2018 it has been poorly studied. At a distance of about 137,000 light-years, the globular cluster NGC 7006 is at the outer reaches of the galaxy. It is also fairly dim at magnitude 11.5 and is in Class I.
See also
Delphinus (Chinese astronomy)
Notes
Citations
References
University of Wisconsin, "Delphinus"
External links
The Deep Photographic Guide to the Constellations: Delphinus
The clickable Delphinus
Star Tales – Delphinus
Warburg Institute Iconographic Database (medieval and early modern images of Delphinus)
Constellations
Northern constellations
Constellations listed by Ptolemy
Legendary mammals
Articles containing video clips | Delphinus | [
"Astronomy"
] | 1,983 | [
"Constellations listed by Ptolemy",
"Delphinus",
"Constellations",
"Northern constellations",
"Sky regions"
] |
8,472 | https://en.wikipedia.org/wiki/Disk%20storage | Disk storage (also sometimes called drive storage) is a data storage mechanism based on a rotating disk. The recording employs various electronic, magnetic, optical, or mechanical changes to the disk's surface layer. A disk drive is a device implementing such a storage mechanism. Notable types are hard disk drives (HDD), containing one or more non-removable rigid platters; the floppy disk drive (FDD) and its removable floppy disk; and various optical disc drives (ODD) and associated optical disc media.
(The spelling disk and disc are used interchangeably except where trademarks preclude one usage, e.g., the Compact Disc logo. The choice of a particular form is frequently historical, as in IBM's usage of the disk form beginning in 1956 with the "IBM 350 disk storage unit".)
Background
Audio information was originally recorded by analog methods (see Sound recording and reproduction). Similarly the first video disc used an analog recording method. In the music industry, analog recording has been mostly replaced by digital optical technology where the data is recorded in a digital format with optical information.
The first commercial digital disk storage device was the IBM 350 which shipped in 1956 as a part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already used sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage; however, the total cost of ownership of data on disk including power and management remains larger than that of tape.
Disk storage is now used in both computer storage and consumer electronic storage, e.g., audio CDs and video discs (VCD, DVD and Blu-ray).
Data on modern disks is stored in fixed length blocks, usually called sectors and varying in length from a few hundred to many thousands of bytes. Gross disk drive capacity is simply the number of disk surfaces times the number of blocks/surface times the number of bytes/block. In certain legacy IBM CKD drives the data was stored on magnetic disks with variable length blocks, called records; record length could vary on and between disks. Capacity decreased as record length decreased due to the necessary gaps between blocks.
Access methods
Digital disk drives are block storage devices. Each disk is divided into logical blocks (collection of sectors). Blocks are addressed using their logical block addresses (LBA). Read from or write to disk happens at the granularity of blocks.
Originally, disk capacity was quite low; it has since been improved in several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each of the disks. Advancements in data compression methods permitted more information to be stored in each of the individual sectors.
The drive stores data onto cylinders, heads, and sectors. The sector unit is the smallest size of data to be stored in a hard disk drive, and each file will have many sector units assigned to it. The smallest entity in a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples (two bytes × two channels × six samples = 24 bytes). The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display.
The information is sent from the computer processor to the BIOS into a chip controlling the data transfer. This is then sent out to the hard drive via a multi-wire connector. Once the data is received onto the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store onto the disk itself. The data is then passed to a chip on the circuit board that controls the access to the drive. The drive is divided into sectors of data stored onto one of the sides of one of the internal disks. An HDD with two disks internally will typically store data on all four surfaces.
The hardware on the drive tells the actuator arm where it is to go for the relevant track, and the compressed information is then sent down to the head, which changes the physical properties, optically or magnetically, for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the best way for quickest retrieval.
Rotation speed and track layout
Mechanically there are two different motions occurring inside the drive. One is the rotation of the disks inside the device. The other is the side-to-side motion of the head across the disk as it moves between tracks.
There are two types of disk rotation methods:
constant linear velocity (used mainly in optical storage) varies the rotational speed of the optical disc depending upon the position of the head, and
constant angular velocity (used in HDDs, standard FDDs, a few optical disc systems, and vinyl audio records) spins the media at one constant speed regardless of where the head is positioned.
Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g., HDDs, FDDs, and Iomega zip drives, use concentric tracks to store data. During a sequential read or write operation, after the drive accesses all the sectors in a track, it repositions the head(s) to the next track. This will cause a momentary delay in the flow of data between the device and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge. When reading or writing data, there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except vinyl records started at the outer edge and spiraled in toward the center.
Interfaces
The disk drive interface is the mechanism/protocol of communication between the rest of the system and the disk drive itself. Storage devices intended for desktop and mobile computers typically use ATA (PATA) and SATA interfaces. Enterprise systems and high-end storage devices will typically use SCSI, SAS, and FC interfaces in addition to some use of SATA.
Basic terminology
Disk Generally refers to magnetic media and devices.
Disc Required by trademarks for certain optical media and devices.
Platter An individual recording disk. A hard disk drive contains a set of platters. Developments in optical technology have led to multiple recording layers on DVDs.
Spindle the spinning axle on which the platters are mounted.
Rotation Platters rotate; two techniques are common:
Constant angular velocity (CAV) keeps the disk spinning at a fixed rate, measured in revolutions per minute (RPM). This means the heads cover more distance per unit of time on the outer tracks than on the inner tracks. This method is typical with computer hard drives.
Constant linear velocity (CLV) keeps the distance covered by the heads per unit time fixed. Thus the disk has to slow down as the arm moves to the outer tracks. This method is typical for CD drives.
Track The circle of recorded data on a single recording surface of a platter.
Sector A segment of a track
Low level formatting Establishing the tracks and sectors.
Head The device that reads and writes the information—magnetic or optical—on the disk surface.
Arm The mechanical assembly that supports the head as it moves in and out.
Seek time Time needed to move the head to a new position (specific track).
Rotational latency Average time, once the arm is on the right track, before a head is over a desired sector.
Data transfer rate The rate at which user data bits are transferred from or to the medium. Technically, this would more accurately be entitled the "gross" data transfer rate.
See also
Disk array
Disk drive performance characteristics
Disk read-and-write head
Magnetic storage
RAID
USB flash drive
References
Computer storage devices
Rotating disc computer storage media | Disk storage | [
"Technology"
] | 1,623 | [
"Computer storage devices",
"Recording devices"
] |
8,488 | https://en.wikipedia.org/wiki/DTMF | Dual-tone multi-frequency signaling (DTMF) is a telecommunication signaling system using the voice-frequency band over telephone lines between telephone equipment and other communications devices and switching centers. DTMF was first developed in the Bell System in the United States, and became known under the trademark Touch-Tone for use in push-button telephones supplied to telephone customers, starting in 1963. DTMF is standardized as ITU-T Recommendation Q.23. It is also known in the UK as MF4.
Touch-tone dialing with a telephone keypad gradually replaced the use of rotary dials and has become the industry standard in telephony to control automated equipment and signal user intent. Other multi-frequency systems are also used for signaling on trunks in the telephone network.
Multifrequency signaling
Before the development of DTMF, telephone numbers were dialed by users with a loop-disconnect (LD) signaling, more commonly known as pulse dialing (dial pulse, DP) in the United States. It functions by interrupting the current in the local loop between the telephone exchange and the calling party's telephone at a precise rate with a switch in the telephone that is operated by the rotary dial as it spins back to its rest position after having been rotated to each desired number. The exchange equipment responds to the dial pulses either directly by operating relays or by storing the number in a digit register that records the dialed number. The physical distance for which this type of dialing was possible was restricted by electrical distortions and was possible only on direct metallic links between end points of a line. Placing calls over longer distances required either operator assistance or provision of special subscriber trunk dialing equipment. Operators used an earlier type of multi-frequency signaling.
Multi-frequency signaling (MF) is a group of signaling methods that use a mixture of two pure tone (pure sine wave) sounds. Various MF signaling protocols were devised by the Bell System and CCITT. The earliest of these were for in-band signaling between switching centers, where long-distance telephone operators used a 16-digit keypad to input the next portion of the destination telephone number in order to contact the next downstream long-distance telephone operator. This semi-automated signaling and switching proved successful in both speed and cost effectiveness. Based on this prior success with using MF by specialists to establish long-distance telephone calls, dual-tone multi-frequency signaling was developed for end-user signaling without the assistance of operators.
The DTMF system uses a set of eight audio frequencies transmitted in pairs to represent 16 signals, represented by the ten digits, the letters A to D, and the symbols # and *. As the signals are audible tones in the voice frequency range, they can be transmitted through electrical repeaters and amplifiers, and over radio and microwave links, thus eliminating the need for intermediate operators on long-distance circuits.
AT&T described the product as "a method for pushbutton signaling from customer stations using the voice transmission path". In order to prevent consumer telephones from interfering with the MF-based routing and switching between telephone switching centers, DTMF frequencies differ from all of the pre-existing MF signaling protocols between switching centers: MF/R1, R2, CCS4, CCS5, and others that were later replaced by SS7 digital signaling. DTMF was known throughout the Bell System by the trademark Touch-Tone. The term was first used by AT&T in commerce on July 5, 1960, and was introduced to the public on November 18, 1963, when the first push-button telephone was made available to the public. As a parent company of Bell Systems, AT&T held the trademark from September 4, 1962, to March 13, 1984. It is standardized by ITU-T Recommendation Q.23. In the UK, it is also known as MF4.
Other vendors of compatible telephone equipment called the Touch-Tone feature tone dialing or DTMF. Automatic Electric (GTE) referred to it as "Touch-calling" in their marketing. Other trade names such as Digitone were used by the Northern Electric Company in Canada.
As a method of in-band signaling, DTMF signals were also used by cable television broadcasters as cue tones to indicate the start and stop times of local commercial insertion points during station breaks for the benefit of cable companies. Until out-of-band signaling equipment was developed in the 1990s, fast, unacknowledged DTMF tone sequences could be heard during the commercial breaks of cable channels in the United States and elsewhere. Previously, terrestrial television stations used DTMF tones to control remote transmitters. In IP telephony, DTMF signals can also be delivered as either in-band or out-of-band tones, or even as a part of signaling protocols, as long as both endpoints agree on a common approach to adopt.
Keypad
The DTMF telephone keypad is laid out as a matrix of push buttons in which each row represents the low frequency component and each column represents the high frequency component of the DTMF signal. The commonly used keypad has four rows and three columns, but a fourth column is present for some applications. Pressing a key sends a combination of the row and column frequencies. For example, the 1 key produces a superimposition of a 697 Hz low tone and a 1209 Hz high tone. Initial pushbutton designs employed levers, enabling each button to activate one row and one column contact. The tones are decoded by the switching center to determine the keys pressed by the user.
#, *, A, B, C, and D
Engineers had envisioned telephones being used to access computers and automated response systems. They consulted with companies to determine the requirements. This led to the addition of the number sign (#, "pound" or "diamond" in this context, "hash", "square" or "gate" in the UK, and "octothorpe" by the original engineers) and asterisk or "star" (*) keys as well as a group of keys for menu selection: A, B, C and D. In the end, the lettered keys were dropped from most keypads and it was many years before the two symbol keys became widely used for vertical service codes such as *67 in the United States and Canada to suppress caller ID.
Public payphones that accept credit cards use these additional codes to send the information from the magnetic strip.
The AUTOVON telephone system of the United States Armed Forces used signals A, B, C, and D to assert certain privilege and priority levels when placing telephone calls. Precedence is still a feature of military telephone networks, but using number combinations. For example, entering 93 before a number is a priority call.
Present-day uses of the signals A, B, C and D are rare in telephone networks, and are exclusive to network control. For example, A is used in some networks for cycling through a list of carriers. The signals are used in radio phone patch and repeater operations to allow, among other uses, control of the repeater while connected to an active telephone line.
The signals *, #, A, B, C and D are still widely used worldwide by amateur radio operators and commercial two-way radio systems for equipment control, repeater control, remote-base operations and some telephone communications systems.
DTMF signaling tones may also be heard at the start and/or end of some prerecorded VHS videocassettes. Information on the master version of the video tape is encoded in the DTMF tones. The encoded tones provide information to automatic duplication machines, such as format, duration and volume levels in order to replicate the original video as closely as possible.
DTMF tones are used in some caller ID systems to transfer the caller ID information, a function that is performed in the United States by Bell 202 modulated frequency-shift keying (FSK) signaling.
Decoding
DTMF was originally decoded by tuned filter banks. By the end of the 20th century, digital signal processing became the predominant technology for decoding. DTMF decoding algorithms typically use the Goertzel algorithm, although the MUSIC algorithm applied to DTMF decoding has been shown to outperform Goertzel and to be the only possibility in cases where the number of available samples is limited. As DTMF signaling is often transmitted in-band with voice or other audio signals present simultaneously, the DTMF signal definition includes strict limits for timing (minimum duration and interdigit spacing), frequency deviations, harmonics, and amplitude relation of the two components with respect to each other (twist).
Other multiple frequency signals
National telephone systems define other tones, outside the DTMF specification, that indicate the status of lines, equipment, or the result of calls, and for control of equipment for troubleshooting or service purposes. Such call-progress tones are often also composed of multiple frequencies and are standardized in each country. The Bell System defined them in the Precise Tone Plan. Bell's Multi-frequency signaling was exploited by blue box devices.
Some early modems, such as Bell 400-style modems, were based on touch-tone frequencies.
See also
In-band signaling
Selective calling
Special information tone
Cue tone
References
Further reading
ITU's recommendations for implementing DTMF services
Frank Durda, Dual Tone Multi-Frequency (Touch-Tone) Reference, 2006.
ITU-T Recommendation Q.24 - Multifrequency push-button signal reception
Telephony signals
Broadcast engineering | DTMF | [
"Engineering"
] | 1,974 | [
"Broadcast engineering",
"Electronic engineering"
] |
8,492 | https://en.wikipedia.org/wiki/Discrete%20mathematics | Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect.
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Topics
Theoretical computer science
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory
Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption.
Logic
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software.
Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic.
Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics
Combinatorics studies the ways in which discrete structures can be combined or arranged.
Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions.
Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics.
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties.
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field.
Order theory is the study of partially ordered sets, both finite and infinite.
Graph theory
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
Number theory
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
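As a minimal sketch of the modular arithmetic and primality testing mentioned above (the Fermat check below is only a probabilistic screen for compositeness, not a proof of primality; all names are illustrative):
def fermat_probably_prime(n, a=2):
    # Fermat's little theorem: if n is prime and n does not divide a, then a**(n-1) is congruent to 1 modulo n.
    # Composite numbers usually fail this congruence, so it works as a quick screen.
    return n > 1 and pow(a, n - 1, n) == 1

print(pow(7, 222, 11))            # modular exponentiation: 7**222 reduced modulo 11
print(fermat_probably_prime(97))  # True: 97 is prime
print(fermat_probably_prime(91))  # False: 91 = 7 * 13 is composite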
Algebraic structures
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Discrete analogues of continuous mathematics
There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures.
Calculus of finite differences, discrete analysis, and discrete calculus
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces.
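A minimal sketch of these ideas in Python (the sequences and the particular recurrence are invented for illustration): the forward difference of a sequence acts as the discrete analogue of the derivative, and a sequence can be defined implicitly by a difference equation and computed term by term:
def forward_difference(seq):
    # Discrete analogue of differentiation: the difference between adjacent terms.
    return [b - a for a, b in zip(seq, seq[1:])]

squares = [n * n for n in range(6)]  # 0, 1, 4, 9, 16, 25
print(forward_difference(squares))   # [1, 3, 5, 7, 9] - the odd numbers

# The recurrence x_{n+1} = x_n + 2*n + 1 with x_0 = 0 defines the same squares implicitly:
x, values = 0, []
for n in range(6):
    values.append(x)
    x = x + 2 * n + 1
print(values)                        # [0, 1, 4, 9, 16, 25]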
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
Discrete geometry
Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form V(x - c) in the affine line over a field K can be studied either as Spec K[x]/(x - c), a point, or as the spectrum of the local ring of K[x] at (x - c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Discrete modelling
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relations. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
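A minimal sketch of such a discretization (the rate constant, step size and time span are arbitrary illustrative values): the continuous equation dy/dt = -k*y is replaced by its explicit-Euler counterpart y_{n+1} = y_n - k*y_n*dt, and the discrete approximation is compared with the exact solution:
import math

k, dt, y = 0.5, 0.1, 1.0
for step in range(20):       # advance from t = 0 to t = 2 in 20 steps of size dt
    y = y - k * y * dt       # discrete counterpart of dy/dt = -k*y
print(y)                     # Euler approximation of y(2)
print(math.exp(-k * 2.0))    # exact solution exp(-k*t) at t = 2, for comparison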
Challenges
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.
See also
Outline of discrete mathematics
Cyberchase, a show that teaches discrete mathematics to children
References
Further reading
External links
Discrete mathematics at the utk.edu Mathematics Archives, providing links to syllabi, tutorials, programs, etc.
Iowa Central: Electrical Technologies Program Discrete mathematics for Electrical engineering. | Discrete mathematics | [
"Mathematics"
] | 2,706 | [
"Discrete mathematics"
] |
8,494 | https://en.wikipedia.org/wiki/DDT | Dichlorodiphenyltrichloroethane, commonly known as DDT, is a colorless, tasteless, and almost odorless crystalline chemical compound, an organochloride. Originally developed as an insecticide, it became infamous for its environmental impacts. DDT was first synthesized in 1874 by the Austrian chemist Othmar Zeidler. DDT's insecticidal action was discovered by the Swiss chemist Paul Hermann Müller in 1939. DDT was used in the second half of World War II to limit the spread of the insect-borne diseases malaria and typhus among civilians and troops. Müller was awarded the Nobel Prize in Physiology or Medicine in 1948 "for his discovery of the high efficiency of DDT as a contact poison against several arthropods". The WHO's anti-malaria campaign of the 1950s and 1960s relied heavily on DDT and the results were promising, though there was a resurgence in developing countries afterwards.
By October 1945, DDT was available for public sale in the United States. Although it was promoted by government and industry for use as an agricultural and household pesticide, there were also concerns about its use from the beginning. Opposition to DDT was focused by the 1962 publication of Rachel Carson's book Silent Spring. It talked about environmental impacts that correlated with the widespread use of DDT in agriculture in the United States, and it questioned the logic of broadcasting potentially dangerous chemicals into the environment with little prior investigation of their environmental and health effects. The book cited claims that DDT and other pesticides caused cancer and that their agricultural use was a threat to wildlife, particularly birds. Although Carson never directly called for an outright ban on the use of DDT, its publication was a seminal event for the environmental movement and resulted in a large public outcry that eventually led, in 1972, to a ban on DDT's agricultural use in the United States. Along with the passage of the Endangered Species Act, the United States ban on DDT is a major factor in the comeback of the bald eagle (the national bird of the United States) and the peregrine falcon from near-extinction in the contiguous United States.
The evolution of DDT resistance and the harm both to humans and the environment led many governments to curtail DDT use. A worldwide ban on agricultural use was formalized under the Stockholm Convention on Persistent Organic Pollutants, which has been in effect since 2004. Recognizing that total elimination in many malaria-prone countries is currently unfeasible in the absence of affordable/effective alternatives for disease control, the convention exempts public health use within World Health Organization (WHO) guidelines from the ban.
DDT still has limited use in disease vector control because of its effectiveness in killing mosquitos and thus reducing malarial infections, but that use is controversial due to environmental and health concerns. DDT is one of many tools to fight malaria, which remains the primary public health challenge in many countries. WHO guidelines require that absence of DDT resistance must be confirmed before using it. Resistance is largely due to agricultural use, in much greater quantities than required for disease prevention.
Properties and chemistry
DDT is similar in structure to the insecticide methoxychlor and the acaricide dicofol. It is highly hydrophobic and nearly insoluble in water but has good solubility in most organic solvents, fats and oils. DDT does not occur naturally and is synthesised by consecutive Friedel–Crafts reactions between chloral (CCl3CHO) and two equivalents of chlorobenzene (C6H5Cl), in the presence of an acidic catalyst. DDT has been marketed under trade names including Anofex, Cezarex, Chlorophenothane, Dicophane, Dinocide, Gesarol, Guesapon, Guesarol, Gyron, Ixodex, Neocid, Neocidol and Zerdane; its INN is clofenotane.
Isomers and related compounds
Commercial DDT is a mixture of several closely related compounds. Due to the nature of the chemical reaction used to synthesize DDT, several combinations of ortho and para arene substitution patterns are formed. The major component (77%) is the desired p,p' isomer. The o,p' isomeric impurity is also present in significant amounts (15%). Dichlorodiphenyldichloroethylene (DDE) and dichlorodiphenyldichloroethane (DDD) make up the balance of impurities in commercial samples. DDE and DDD are also the major metabolites and environmental breakdown products. DDT, DDE and DDD are sometimes referred to collectively as DDX.
Production and use
DDT has been formulated in multiple forms, including solutions in xylene or petroleum distillates, emulsifiable concentrates, water-wettable powders, granules, aerosols, smoke candles and charges for vaporizers and lotions.
From 1950 to 1980, DDT was extensively used in agriculture – more than 40,000 tonnes each year worldwide – and it has been estimated that a total of 1.8 million tonnes have been produced globally since the 1940s. In the United States, it was manufactured by some 15 companies, including Monsanto, Ciba, Montrose Chemical Company, Pennwalt, and Velsicol Chemical Corporation. Production peaked in 1963 at 82,000 tonnes per year. More than 600,000 tonnes (1.35 billion pounds) were applied in the US before the 1972 ban. Usage peaked in 1959 at about 36,000 tonnes.
China ceased production in 2007, leaving India the only country still manufacturing DDT; it is the largest consumer. In 2009, 3,314 tonnes were produced for malaria control and visceral leishmaniasis. In recent years, in addition to India, just seven other countries, all in Africa, are still using DDT.
Mechanism of insecticide action
In insects, DDT opens voltage-sensitive sodium ion channels in neurons, causing them to fire spontaneously, which leads to spasms and eventual death. Insects with certain mutations in their sodium channel gene are resistant to DDT and similar insecticides. DDT resistance is also conferred by up-regulation of genes expressing cytochrome P450 in some insect species, as greater quantities of some enzymes of this group accelerate the toxin's metabolism into inactive metabolites. Genomic studies in the model genetic organism Drosophila melanogaster revealed that high level DDT resistance is polygenic, involving multiple resistance mechanisms. In the absence of genetic adaptation, Roberts and Andre 1994 find behavioral avoidance nonetheless provides insects with some protection against DDT. The M918T mutation event produces dramatic kdr for pyrethroids but Usherwood et al. 2005 find it is entirely ineffective against DDT. Scott 2019 believes this test in Drosophila oocytes holds for oocytes in general.
History
DDT was first synthesized in 1874 by Othmar Zeidler under the supervision of Adolf von Baeyer. It was further described in 1929 in a dissertation by W. Bausch and in two subsequent publications in 1930. The insecticide properties of "multiple chlorinated aliphatic or fat-aromatic alcohols with at least one trichloromethane group" were described in a patent in 1934 by Wolfgang von Leuthold. DDT's insecticidal properties were not, however, discovered until 1939 by the Swiss scientist Paul Hermann Müller, who was awarded the 1948 Nobel Prize in Physiology and Medicine for his efforts.
Use in the 1940s and 1950s
DDT is the best-known of several chlorine-containing pesticides used in the 1940s and 1950s. During this time, the use of DDT was driven by protecting American soldiers from diseases in tropical areas. Both British and American scientists hoped to use it to control spread of malaria, typhus, dysentery, and typhoid fever among overseas soldiers, especially considering that the pyrethrum was harder to access since it came mainly from Japan. Due to the potency of DDT, it was not long before America's War Production Board placed it on military supply lists in 1942 and 1943 and encouraged its production for overseas use. Enthusiasm regarding DDT became obvious through the American government's advertising campaigns of posters depicting Americans fighting the Axis powers and insects and through media publications celebrating its military uses. In the South Pacific, it was sprayed aerially for malaria and dengue fever control with spectacular effects. While DDT's chemical and insecticidal properties were important factors in these victories, advances in application equipment coupled with competent organization and sufficient manpower were also crucial to the success of these programs.
In 1945, DDT was made available to farmers as an agricultural insecticide and played a role in the elimination of malaria in Europe and North America. Despite concerns emerging in the scientific community, and lack of research, the FDA considered it safe up to 7 parts per million in food. There was a large economic incentive to push DDT into the market and sell it to farmers, governments, and individuals to control diseases and increase food production.
DDT was also a way for American influence to reach abroad through DDT-spraying campaigns. In the 1944 issue of Life magazine there was a feature regarding the Italian program showing pictures of American public health officials in uniforms spraying DDT on Italian families.
In 1955, the World Health Organization commenced a program to eradicate malaria in countries with low to moderate transmission rates worldwide, relying largely on DDT for mosquito control and rapid diagnosis and treatment to reduce transmission. The program eliminated the disease in "North America, Europe, the former Soviet Union", and in "Taiwan, much of the Caribbean, the Balkans, parts of northern Africa, the northern region of Australia, and a large swath of the South Pacific" and dramatically reduced mortality in Sri Lanka and India.
However, failure to sustain the program, increasing mosquito tolerance to DDT, and increasing parasite tolerance led to a resurgence. In many areas early successes partially or completely reversed, and in some cases rates of transmission increased. The program succeeded in eliminating malaria only in areas with "high socio-economic status, well-organized healthcare systems, and relatively less intensive or seasonal malaria transmission".
DDT was less effective in tropical regions due to the continuous life cycle of mosquitoes and poor infrastructure. It was applied in sub-Saharan Africa by various colonial states, but the 'global' WHO eradication program didn't include the region. Mortality rates in that area never declined to the same dramatic extent, and now constitute the bulk of malarial deaths worldwide, especially following the disease's resurgence as a result of resistance to drug treatments and the spread of the deadly malarial variant caused by Plasmodium falciparum. Eradication was abandoned in 1969 and attention instead focused on controlling and treating the disease. Spraying programs (especially using DDT) were curtailed due to concerns over safety and environmental effects, as well as problems in administrative, managerial and financial implementation. Efforts shifted from spraying to the use of bednets impregnated with insecticides and other interventions.
United States ban
By October 1945, DDT was available for public sale in the United States, used both as an agricultural pesticide and as a household insecticide. Although its use was promoted by government and the agricultural industry, US scientists such as FDA pharmacologist Herbert O. Calvery expressed concern over possible hazards associated with DDT as early as 1944. In 1947, Bradbury Robinson, a physician and nutritionist practicing in St. Louis, Michigan, warned of the dangers of using the pesticide DDT in agriculture. DDT had been researched and manufactured in St. Louis by the Michigan Chemical Corporation, later purchased by Velsicol Chemical Corporation, and had become an important part of the local economy. Citing research performed by Michigan State University in 1946, Robinson, a past president of the local Conservation Club, opined that:
As its production and use increased, public response was mixed. At the same time that DDT was hailed as part of the "world of tomorrow", concerns were expressed about its potential to kill harmless and beneficial insects (particularly pollinators), birds, fish, and eventually humans. The issue of toxicity was complicated, partly because DDT's effects varied from species to species, and partly because consecutive exposures could accumulate, causing damage comparable to large doses. A number of states attempted to regulate DDT. In the 1950s the federal government began tightening regulations governing its use. These events received little attention. Women like Dorothy Colson and Mamie Ella Plyler of Claxton, Georgia, gathered evidence about DDT's effects and wrote to the Georgia Department of Public Health, the National Health Council in New York City, and other organizations.
In 1957 The New York Times reported an unsuccessful struggle to restrict DDT use in Nassau County, New York, and the issue came to the attention of the popular naturalist-author Rachel Carson when a friend, Olga Huckins, wrote to her including an article she had written in the Boston Globe about the devastation of her local bird population after DDT spraying. William Shawn, editor of The New Yorker, urged her to write a piece on the subject, which developed into her 1962 book Silent Spring. The book argued that pesticides, including DDT, were poisoning both wildlife and the environment and were endangering human health. Silent Spring was a best seller, and public reaction to it launched the modern environmental movement in the United States. The year after it appeared, President John F. Kennedy ordered his Science Advisory Committee to investigate Carson's claims. The committee's report "add[ed] up to a fairly thorough-going vindication of Rachel Carson's Silent Spring thesis", in the words of the journal Science, and recommended a phaseout of "persistent toxic pesticides". In 1965, the U.S. military removed DDT from the military supply system due in part to the development of resistance by body lice to DDT; it was replaced by lindane.
In the mid-1960s, DDT became a prime target of the burgeoning environmental movement, as concern about DDT and its effects began to rise in local communities. In 1966, a fish kill in Suffolk County, NY, was linked to a 5,000-gallon DDT dump by the county's mosquito commission, leading a group of scientists and lawyers to file a lawsuit to stop the county's further use of DDT. A year later, the group, led by Victor Yannacone and Charles Wurster, founded the Environmental Defense Fund (EDF), along with scientists Art Cooley and Dennis Puleston, and brought a string of lawsuits against DDT and other persistent pesticides in Michigan and Wisconsin.
Around the same time, evidence was mounting further about DDT causing catastrophic declines in wildlife reproduction, especially in birds of prey like peregrine falcons, bald eagles, ospreys, and brown pelicans, whose eggshells became so thin that they often cracked before hatching. Toxicologists like David Peakall were measuring DDE levels in the eggs of peregrine falcons and California condors and finding that increased levels corresponded with thinner shells. Compounding the effect was DDT’s persistence in the environment, as it was unable to dissolve in water, and ended up accumulating in animal fat and disrupting hormone metabolism across a wide range of species.
In response to an EDF suit, the U.S. District Court of Appeals in 1971 ordered the EPA to begin the de-registration procedure for DDT. After an initial six-month review process, William Ruckelshaus, the Agency's first Administrator, rejected an immediate suspension of DDT's registration, citing studies from the EPA's internal staff stating that DDT was not an imminent danger. However, these findings were criticized, as they were performed mostly by economic entomologists inherited from the United States Department of Agriculture, who many environmentalists felt were biased towards agribusiness and understated concerns about human health and wildlife. The decision thus created controversy.
The EPA held seven months of hearings in 1971–1972, with scientists giving evidence for and against DDT. In the summer of 1972, Ruckelshaus announced the cancellation of most uses of DDT exempting public health uses under some conditions. Again, this caused controversy. Immediately after the announcement, both the EDF and the DDT manufacturers filed suit against EPA. Many in the agricultural community were concerned that food production would be severely impacted, while proponents of pesticides warned of increased breakouts of insect-borne diseases and questioned the accuracy of giving animals high amounts of pesticides for cancer potential. Industry sought to overturn the ban, while the EDF wanted a comprehensive ban. The cases were consolidated, and in 1973 the United States Court of Appeals for the District of Columbia Circuit ruled that the EPA had acted properly in banning DDT. During the late 1970s, the EPA also began banning organochlorines, pesticides that were chemically similar to DDT. These included aldrin, dieldrin, chlordane, heptachlor, toxaphene, and mirex.
Some uses of DDT continued under the public health exemption. For example, in June 1979, the California Department of Health Services was permitted to use DDT to suppress flea vectors of bubonic plague. DDT continued to be produced in the United States for foreign markets until 1985, when over 300 tons were exported.
International usage restrictions
In the 1970s and 1980s, agricultural use was banned in most developed countries, beginning with Hungary in 1968 although in practice it continued to be used through at least 1970. This was followed by Norway and Sweden in 1970, West Germany and the United States in 1972, but not in the United Kingdom until 1984.
In contrast to West Germany, in the German Democratic Republic DDT was used until 1988. Especially of relevance were large-scale applications in forestry in the years 1982–1984, with the aim to combat bark beetle and pine moth. As a consequence, DDT-concentrations in eastern German forest soils are still significantly higher compared to soils in the former western German states.
By 1991, total bans, including for disease control, were in place in at least 26 countries; for example, Cuba in 1970, the US in the 1980s, Singapore in 1984, Chile in 1985, and the Republic of Korea in 1986.
The Stockholm Convention on Persistent Organic Pollutants, which took effect in 2004, put a global ban on several persistent organic pollutants, and restricted DDT use to vector control. The convention was ratified by more than 170 countries. Recognizing that total elimination in many malaria-prone countries is currently unfeasible in the absence of affordable/effective alternatives, the convention exempts public health use within World Health Organization (WHO) guidelines from the ban. Resolution 60.18 of the World Health Assembly commits WHO to the Stockholm Convention's aim of reducing and ultimately eliminating DDT. Malaria Foundation International states, "The outcome of the treaty is arguably better than the status quo going into the negotiations. For the first time, there is now an insecticide which is restricted to vector control only, meaning that the selection of resistant mosquitoes will be slower than before."
Despite the worldwide ban, agricultural use continued in India, North Korea, and possibly elsewhere. As of 2013, an estimated 3,000 to 4,000 tons of DDT were produced for disease vector control, including 2,786 tons in India. DDT is applied to the inside walls of homes to kill or repel mosquitoes. This intervention, called indoor residual spraying (IRS), greatly reduces environmental damage. It also reduces the incidence of DDT resistance. For comparison, the amount of chemical used to treat a cotton crop during a typical U.S. growing season can be enough to treat roughly 1,700 homes.
Environmental impact
DDT is a persistent organic pollutant that is readily adsorbed to soils and sediments, which can act both as sinks and as long-term sources of exposure affecting organisms. Depending on environmental conditions, its soil half-life can range from 22 days to 30 years. Routes of loss and degradation include runoff, volatilization, photolysis and aerobic and anaerobic biodegradation. Due to hydrophobic properties, in aquatic ecosystems DDT and its metabolites are absorbed by aquatic organisms and adsorbed on suspended particles, leaving little DDT dissolved in the water (however, its half-life in aquatic environments is listed by the National Pesticide Information Center as 150 years). Its breakdown products and metabolites, DDE and DDD, are also persistent and have similar chemical and physical properties. DDT and its breakdown products are transported from warmer areas to the Arctic by the phenomenon of global distillation, where they then accumulate in the region's food web.
Medical researchers in 1974 found a measurable and significant difference in the presence of DDT in human milk between mothers who lived in New Brunswick and mothers who lived in Nova Scotia, "possibly because of the wider use of insecticide sprays in the past".
Because of its lipophilic properties, DDT can bioaccumulate, especially in predatory birds. DDT is toxic to a wide range of living organisms, including marine animals such as crayfish, daphnids, sea shrimp and many species of fish. DDT, DDE and DDD magnify through the food chain, with apex predators such as raptor birds concentrating more chemicals than other animals in the same environment. They are stored mainly in body fat. DDT and DDE are resistant to metabolism; in humans, their half-lives are 6 and up to 10 years, respectively. In the United States, these chemicals were detected in almost all human blood samples tested by the Centers for Disease Control in 2005, though their levels have sharply declined since most uses were banned. Estimated dietary intake has declined, although FDA food tests commonly detect it.
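As a small worked illustration of what the half-lives quoted above imply, the sketch below applies the standard exponential-decay relation (the time span of 12 years is an arbitrary example; the 6- and 10-year half-lives are the figures given in this section):
def fraction_remaining(years, half_life_years):
    # After each half-life, half of the remaining burden is still present.
    return 0.5 ** (years / half_life_years)

print(fraction_remaining(12, 6))             # DDT, 6-year half-life: 0.25 of the burden remains after 12 years
print(round(fraction_remaining(12, 10), 2))  # DDE, 10-year half-life: roughly 0.44 remains after 12 years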
Despite being banned for many years, in 2018 research showed that DDT residues are still present in European soils and Spanish rivers.
Eggshell thinning
The chemical and its breakdown products DDE and DDD caused eggshell thinning and population declines in multiple North American and European bird of prey species. Both laboratory experiments and field studies confirmed this effect. The effect was first conclusively proven at Bellow Island in Lake Michigan during University of Michigan-funded studies on American herring gulls in the mid-1960s. DDE-related eggshell thinning is considered a major reason for the decline of the bald eagle, brown pelican, peregrine falcon and osprey. However, birds vary in their sensitivity to these chemicals, with birds of prey, waterfowl and song birds being more susceptible than chickens and related species. Even in 2010, California condors that feed on sea lions at Big Sur that in turn feed in the Palos Verdes Shelf area of the Montrose Chemical Superfund site exhibited continued thin-shell problems, though DDT's role in the decline of the California condor is disputed.
The biological thinning mechanism is not entirely understood, but DDE appears to be more potent than DDT, and strong evidence indicates that p,p-DDE inhibits calcium ATPase in the membrane of the shell gland and reduces the transport of calcium carbonate from blood into the eggshell gland. This results in a dose-dependent thickness reduction. Other evidence indicates that o,p'-DDT disrupts female reproductive tract development, later impairing eggshell quality. Multiple mechanisms may be at work, or different mechanisms may operate in different species.
Human health
DDT is an endocrine disruptor. It is considered likely to be a human carcinogen although the majority of studies suggest it is not directly genotoxic. DDE acts as a weak androgen receptor antagonist, but not as an estrogen. p,p'-DDT, DDT's main component, has little or no androgenic or estrogenic activity. The minor component o,p'-DDT has weak estrogenic activity.
Acute toxicity
DDT is classified as "moderately toxic" by the U.S. National Toxicology Program (NTP) and "moderately hazardous" by WHO, based on the rat oral LD50 of 113 mg/kg. Indirect exposure is considered relatively non-toxic for humans.
Chronic toxicity
Primarily through the tendency for DDT to build up in areas of the body with high lipid content, chronic exposure can affect reproductive capabilities and the embryo or fetus.
A review article in The Lancet states: "research has shown that exposure to DDT at amounts that would be needed in malaria control might cause preterm birth and early weaning ... toxicological evidence shows endocrine-disrupting properties; human data also indicate possible disruption in semen quality, menstruation, gestational length, and duration of lactation".
Other studies document decreases in semen quality among men with high exposures (generally from indoor residual spraying).
Studies are inconsistent on whether high blood DDT or DDE levels increase time to pregnancy. In mothers with high DDE blood serum levels, daughters may have up to a 32% increase in the probability of conceiving, but increased DDT levels have been associated with a 16% decrease in one study.
Indirect exposure of mothers through workers directly in contact with DDT is associated with an increase in spontaneous abortions.
Other studies found that DDT or DDE interfere with proper thyroid function in pregnancy and childhood.
Mothers with high levels of DDT circulating in their blood during pregnancy were found to be more likely to give birth to children who would go on to develop autism.
Carcinogenicity
In 2015, the International Agency for Research on Cancer classified DDT as Group 2A "probably carcinogenic to humans". Previous assessments by the U.S. National Toxicology Program classified it as "reasonably anticipated to be a carcinogen" and by the EPA classified DDT, DDE and DDD as class B2 "probable" carcinogens; these evaluations were based mainly on animal studies.
A 2005 Lancet review stated that occupational DDT exposure was associated with increased pancreatic cancer risk in 2 case control studies, but another study showed no DDE dose-effect association. Results regarding a possible association with liver cancer and biliary tract cancer are conflicting: workers who did not have direct occupational DDT contact showed increased risk. White men had an increased risk, but not white women or black men. Results about an association with multiple myeloma, prostate and testicular cancer, endometrial cancer and colorectal cancer have been inconclusive or generally do not support an association. A 2017 review of liver cancer studies concluded that "organochlorine pesticides, including DDT, may increase hepatocellular carcinoma risk".
A 2009 review, whose co-authors included persons engaged in DDT-related litigation, reached broadly similar conclusions, with an equivocal association with testicular cancer. Case–control studies did not support an association with leukemia or lymphoma.
Breast cancer
The question of whether DDT or DDE are risk factors in breast cancer has not been conclusively answered. Several meta analyses of observational studies have concluded that there is no overall relationship between DDT exposure and breast cancer risk. The United States Institute of Medicine reviewed data on the association of breast cancer with DDT exposure in 2012 and concluded that a causative relationship could neither be proven nor disproven.
A 2007 case-control study using archived blood samples found that breast cancer risk was increased 5-fold among women who were born prior to 1931 and who had high serum DDT levels in 1963. Reasoning that DDT use became widespread in 1945 and peaked around 1950, they concluded that the ages of 14–20 were a critical period in which DDT exposure leads to increased risk. This study, which suggests a connection between DDT exposure and breast cancer that would not be picked up by most studies, has received variable commentary in third-party reviews. One review suggested that "previous studies that measured exposure in older women may have missed the critical period". The National Toxicology Program notes that while the majority of studies have not found a relationship between DDT exposure and breast cancer, positive associations have been seen in a "few studies among women with higher levels of exposure and among certain subgroups of women".
A 2015 case control study identified a link (odds ratio 3.4) between in-utero exposure (as estimated from archived maternal blood samples) and breast cancer diagnosis in daughters. The findings "support classification of DDT as an endocrine disruptor, a predictor of breast cancer, and a marker of high risk".
Malaria control
Malaria remains the primary public health challenge in many countries. In 2015, there were 214 million cases of malaria worldwide resulting in an estimated 438,000 deaths, 90% of which occurred in Africa. DDT is one of many tools to fight the disease. Its use in this context has been called everything from a "miracle weapon [that is] like Kryptonite to the mosquitoes", to "toxic colonialism".
Before DDT, eliminating mosquito breeding grounds by drainage or poisoning with Paris green or pyrethrum was sometimes successful. In parts of the world with rising living standards, the elimination of malaria was often a collateral benefit of the introduction of window screens and improved sanitation. A variety of usually simultaneous interventions represents best practice. These include antimalarial drugs to prevent or treat infection; improvements in public health infrastructure to diagnose, sequester and treat infected individuals; bednets and other methods intended to keep mosquitoes from biting humans; and vector control strategies such as larviciding with insecticides, ecological controls such as draining mosquito breeding grounds or introducing fish to eat larvae and indoor residual spraying (IRS) with insecticides, possibly including DDT. IRS involves the treatment of interior walls and ceilings with insecticides. It is particularly effective against mosquitoes, since many species rest on an indoor wall before or after feeding. DDT is one of 12 WHO–approved IRS insecticides.
The WHO's anti-malaria campaign of the 1950s and 1960s relied heavily on DDT and the results were promising, though temporary in developing countries. Experts tie malarial resurgence to multiple factors, including poor leadership, management and funding of malaria control programs; poverty; civil unrest; and increased irrigation. The evolution of resistance to first-generation drugs (e.g. chloroquine) and to insecticides exacerbated the situation. Resistance was largely fueled by unrestricted agricultural use. Resistance and the harm both to humans and the environment led many governments to curtail DDT use in vector control and agriculture. In 2006 WHO reversed a longstanding policy against DDT by recommending that it be used as an indoor pesticide in regions where malaria is a major problem.
Once the mainstay of anti-malaria campaigns, DDT was used for indoor residual spraying in only five countries as of 2019.
Initial effectiveness
When it was introduced in World War II, DDT was effective in reducing malaria morbidity and mortality. WHO's anti-malaria campaign, which consisted mostly of spraying DDT and rapid treatment and diagnosis to break the transmission cycle, was initially successful as well. For example, in Sri Lanka, the program reduced cases from about one million per year before spraying to just 18 in 1963 and 29 in 1964. Thereafter the program was halted to save money and malaria rebounded to 600,000 cases in 1968 and the first quarter of 1969. The country resumed DDT vector control but the mosquitoes had evolved resistance in the interim, presumably because of continued agricultural use. The program switched to malathion, but despite initial successes, malaria continued its resurgence into the 1980s.
DDT remains on WHO's list of insecticides recommended for IRS. After the appointment of Arata Kochi as head of its anti-malaria division, WHO's policy shifted from recommending IRS only in areas of seasonal or episodic transmission of malaria, to advocating it in areas of continuous, intense transmission. WHO reaffirmed its commitment to phasing out DDT, aiming "to achieve a 30% cut in the application of DDT world-wide by 2014 and its total phase-out by the early 2020s if not sooner" while simultaneously combating malaria. WHO plans to implement alternatives to DDT to achieve this goal.
South Africa continues to use DDT under WHO guidelines. In 1996, the country switched to alternative insecticides and malaria incidence increased dramatically. Returning to DDT and introducing new drugs brought malaria back under control. Malaria cases increased in South America after countries in that continent stopped using DDT. Research data showed a strong negative relationship between DDT residual house sprayings and malaria. In a research from 1993 to 1995, Ecuador increased its use of DDT and achieved a 61% reduction in malaria rates, while each of the other countries that gradually decreased its DDT use had large increases.
Mosquito resistance
In some areas, resistance reduced DDT's effectiveness. WHO guidelines require that absence of resistance must be confirmed before using the chemical. Resistance is largely due to agricultural use, in much greater quantities than required for disease prevention.
Resistance was noted early in spray campaigns. Paul Russell, former head of the Allied Anti-Malaria campaign, observed in 1956 that "resistance has appeared after six or seven years". Resistance has been detected in Sri Lanka, Pakistan, Turkey and Central America and it has largely been replaced by organophosphate or carbamate insecticides, e.g. malathion or bendiocarb.
In many parts of India, DDT is ineffective. Agricultural uses were banned in 1989 and its anti-malarial use has been declining. Urban use ended. One study concluded that "DDT is still a viable insecticide in indoor residual spraying owing to its effectivity in well supervised spray operation and high excito-repellency factor."
Studies of malaria-vector mosquitoes in KwaZulu-Natal Province, South Africa found susceptibility to 4% DDT (WHO's susceptibility standard), in 63% of the samples, compared to the average of 87% in the same species caught in the open. The authors concluded that "Finding DDT resistance in the vector An. arabiensis, close to the area where we previously reported pyrethroid-resistance in the vector An. funestus Giles, indicates an urgent need to develop a strategy of insecticide resistance management for the malaria control programmes of southern Africa."
DDT can still be effective against resistant mosquitoes and the avoidance of DDT-sprayed walls by mosquitoes is an additional benefit of the chemical. For example, a 2007 study reported that resistant mosquitoes avoided treated huts. The researchers argued that DDT was the best pesticide for use in IRS (even though it did not afford the most protection from mosquitoes out of the three test chemicals) because the other pesticides worked primarily by killing or irritating mosquitoes – encouraging the development of resistance. Others argue that the avoidance behavior slows eradication. Unlike other insecticides such as pyrethroids, DDT requires long exposure to accumulate a lethal dose; however its irritant property shortens contact periods. "For these reasons, when comparisons have been made, better malaria control has generally been achieved with pyrethroids than with DDT." In India outdoor sleeping and night duties are common, implying that "the excito-repellent effect of DDT, often reported useful in other countries, actually promotes outdoor transmission".
Residents' concerns
IRS is effective if at least 80% of homes and barns in a residential area are sprayed. Lower coverage rates can jeopardize program effectiveness. Many residents resist DDT spraying, objecting to the lingering smell, stains on walls, and the potential exacerbation of problems with other insect pests. Pyrethroid insecticides (e.g. deltamethrin and lambda-cyhalothrin) can overcome some of these issues, increasing participation.
Human exposure
A 1994 study found that South Africans living in sprayed homes have levels that are several orders of magnitude greater than others. Breast milk from South African mothers contains high levels of DDT and DDE. It is unclear to what extent these levels arise from home spraying vs food residues. Evidence indicates that these levels are associated with infant neurological abnormalities.
Most studies of DDT's human health effects have been conducted in developed countries where DDT is not used and exposure is relatively low.
Illegal diversion to agriculture is also a concern as it is difficult to prevent and its subsequent use on crops is uncontrolled. For example, DDT use is widespread in Indian agriculture, particularly mango production and is reportedly used by librarians to protect books. Other examples include Ethiopia, where DDT intended for malaria control is reportedly used in coffee production, and Ghana where it is used for fishing. The residues in crops at levels unacceptable for export have been an important factor in bans in several tropical countries. Adding to this problem is a lack of skilled personnel and management.
Criticism of restrictions on DDT use
Restrictions on DDT usage have been criticized by some organizations opposed to the environmental movement, including Roger Bate of the pro-DDT advocacy group Africa Fighting Malaria and the libertarian think tank Competitive Enterprise Institute; these sources oppose restrictions on DDT and attribute large numbers of deaths to such restrictions, sometimes in the millions. These arguments were rejected as "outrageous" by former WHO scientist Socrates Litsios. May Berenbaum, University of Illinois entomologist, says, "to blame environmentalists who oppose DDT for more deaths than Hitler is worse than irresponsible". More recently, Michael Palmer, a professor of chemistry at the University of Waterloo, has pointed out that DDT is still used to prevent malaria, that its declining use is primarily due to increases in manufacturing costs, and that in Africa, efforts to control malaria have been regional or local, not comprehensive.
Criticisms of a DDT "ban" often specifically reference the 1972 United States ban (with the erroneous implication that this constituted a worldwide ban and prohibited use of DDT in vector control). Reference is often made to Silent Spring, even though Carson never pushed for a DDT ban. John Quiggin and Tim Lambert wrote, "the most striking feature of the claim against Carson is the ease with which it can be refuted".
Investigative journalist Adam Sarvana and others characterize these notions as "myths" promoted principally by Roger Bate of the pro-DDT advocacy group Africa Fighting Malaria (AFM).
Alternatives
Insecticides
Organophosphate and carbamate insecticides, e.g. malathion and bendiocarb, respectively, are more expensive than DDT per kilogram and are applied at roughly the same dosage. Pyrethroids such as deltamethrin are also more expensive than DDT, but are applied more sparingly (0.02–0.3 g/m2 vs 1–2 g/m2), so the net cost per house per treatment is about the same. DDT has one of the longest residual efficacy periods of any IRS insecticide, lasting 6 to 12 months. Pyrethroids will remain active for only 4 to 6 months, and organophosphates and carbamates remain active for 2 to 6 months. In many malaria-endemic countries, malaria transmission occurs year-round, meaning that the high expense of conducting a spray campaign (including hiring spray operators, procuring insecticides, and conducting pre-spray outreach campaigns to encourage people to be home and to accept the intervention) will need to occur multiple times per year for these shorter-lasting insecticides.
In 2019, the related compound difluorodiphenyltrichloroethane (DFDT) was described as a potentially more effective and therefore potentially safer alternative to DDT.
Non-chemical vector control
Before DDT, malaria was successfully eliminated or curtailed in several tropical areas by removing or poisoning mosquito breeding grounds and larva habitats, for example by eliminating standing water. These methods have seen little application in Africa for more than half a century. According to CDC, such methods are not practical in Africa because "Anopheles gambiae, one of the primary vectors of malaria in Africa, breeds in numerous small pools of water that form due to rainfall ... It is difficult, if not impossible, to predict when and where the breeding sites will form, and to find and treat them before the adults emerge."
The relative effectiveness of IRS versus other malaria control techniques (e.g. bednets or prompt access to anti-malarial drugs) varies and is dependent on local conditions.
A WHO study released in January 2008 found that mass distribution of insecticide-treated mosquito nets and artemisinin–based drugs cut malaria deaths in half in malaria-burdened Rwanda and Ethiopia. IRS with DDT did not play an important role in mortality reduction in these countries.
Vietnam has enjoyed declining malaria cases and a 97% mortality reduction after switching in 1991 from a poorly funded DDT-based campaign to a program based on prompt treatment, bednets and pyrethroid group insecticides.
In Mexico, effective and affordable chemical and non-chemical strategies were so successful that the Mexican DDT manufacturing plant ceased production due to lack of demand.
A review of fourteen studies in sub-Saharan Africa, covering insecticide-treated nets, residual spraying, chemoprophylaxis for children, chemoprophylaxis or intermittent treatment for pregnant women, a hypothetical vaccine and changing front–line drug treatment, found decision making limited by the lack of information on the costs and effects of many interventions, the small number of cost-effectiveness analyses, the lack of evidence on the costs and effects of packages of measures and the problems in generalizing or comparing studies that relate to specific settings and use different methodologies and outcome measures. The two cost-effectiveness estimates of DDT residual spraying examined were not found to provide an accurate estimate of the cost-effectiveness of DDT spraying; the resulting estimates may not be good predictors of cost-effectiveness in current programs.
However, a study in Thailand found the cost per malaria case prevented of DDT spraying (US$1.87) to be 21% greater than the cost per case prevented of lambda-cyhalothrin–treated nets (US$1.54), casting some doubt on the assumption that DDT was the most cost-effective measure. The director of Mexico's malaria control program found similar results, declaring that it was 25% cheaper for Mexico to spray a house with synthetic pyrethroids than with DDT. However, another study in South Africa found generally lower costs for DDT spraying than for impregnated nets.
A more comprehensive approach to measuring the cost-effectiveness or efficacy of malarial control would not only measure the cost in dollars, as well as the number of people saved, but would also consider ecological damage and negative human health impacts. One preliminary study found that it is likely that the detriment to human health approaches or exceeds the beneficial reductions in malarial cases, except perhaps in epidemics. It is similar to the earlier study regarding estimated theoretical infant mortality caused by DDT and subject to the criticism also mentioned earlier.
A study in the Solomon Islands found that "although impregnated bed nets cannot entirely replace DDT spraying without substantial increase in incidence, their use permits reduced DDT spraying".
A comparison of four successful programs against malaria in Brazil, India, Eritrea and Vietnam does not endorse any single strategy but instead states, "Common success factors included conducive country conditions, a targeted technical approach using a package of effective tools, data-driven decision-making, active leadership at all levels of government, involvement of communities, decentralized implementation and control of finances, skilled technical and managerial capacity at national and sub-national levels, hands-on technical and programmatic support from partner agencies, and sufficient and flexible financing."
DDT resistant mosquitoes may be susceptible to pyrethroids in some countries. However, pyrethroid resistance in Anopheles mosquitoes is on the rise with resistant mosquitoes found in multiple countries.
See also
DDT in New Zealand
Operation Cat Drop
Environmental hazard
Index of pesticide articles
Pest control
Pesticide
Pesticide residue
Pesticide standard value
WHO Pesticide Evaluation Scheme
Mosquito control
References
Further reading
Berry-Cabán, Cristóbal S. "DDT and silent spring: fifty years after". Journal of Military and Veterans' Health 19 (2011): 19–24. online
Conis, Elena. "Debating the health effects of DDT: Thomas Jukes, Charles Wurster, and the fate of an environmental pollutant". Public Health Reports 125.2 (2010): 337–342. online
Davis, Frederick Rowe. "Pesticides and the perils of synecdoche in the history of science and environmental history". History of Science 57.4 (2019): 469–492.
"DDT Banning" in Richard L. Wilson, ed. Historical Encyclopedia of American Business, Vol I. Accounting Industry – Google, (Salem Press: 2009) p. 223 .
Dunlap, Thomas, ed. DDT, Silent Spring, and the Rise of Environmentalism (University of Washington Press, 2008).
Dunlap, Thomas, ed. DDT, Silent Spring, and the Rise of Environmentalism: Classic texts (University of Washington Press, 2015).
Kinkela, David. DDT and the American Century: Global Health, Environmental Politics, and the Pesticide That Changed the World (University of North Carolina Press, 2011).
Morris, Peter J. T. (2019). "Chapter 9: A Tale of Two Nations: DDT in the United States and the United Kingdom". Hazardous Chemicals: Agents of Risk and Change, 1800–2000. Environment in History: International Perspectives 17. Berghahn Books. pp. 294–327.
External links
Chemistry
DDT at The Periodic Table of Videos (University of Nottingham)
Toxicity
Scorecard: The Pollution Information Site – DDT
Interview with Barbara Cohn, PhD about DDT and breast cancer
Pesticide residues in food 2000 : DDT
Politics and DDT
Malaria and DDT
'Andrew Spielman, Harvard School of Public Health, discusses environmentally friendly control of Malaria and uses of DDT Freeview video provided by the Vega Science Trust
DDT in popular culture
Phil Allegretti Pesticide Collection consisting of ephemera and 3-D objects, including cans, sprayers, and diffusers, related to DDT pesticide and insecticide in the United States in the mid-20th century (all images freely available for download in variety of formats from Science History Institute Digital Collections at digital.sciencehistory.org).
4-Chlorophenyl compounds
Endocrine disruptors
Environmental controversies
Environmental effects of pesticides
GPER agonists
IARC Group 2A carcinogens
Malaria
Nonsteroidal antiandrogens
Persistent organic pollutants under the Stockholm Convention
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution
Pesticides
Sodium channel openers
Trichloromethyl compounds | DDT | [
"Chemistry",
"Biology",
"Environmental_science"
] | 9,734 | [
"Pesticides",
"Toxicology",
"Persistent organic pollutants under the Stockholm Convention",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Endocrine disruptors",
"Biocides"
] |
8,495 | https://en.wikipedia.org/wiki/Data%20set | A data set (or dataset) is a collection of data. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as for example height and weight of an object, for each member of the data set. Data sets can also consist of a collection of documents or files.
In the open data discipline, data set is the unit to measure the information released in a public open data repository. The European data.europa.eu portal aggregates more than a million data sets.
Properties
Several characteristics define a data set's structure and properties. These include the number and types of the attributes or variables, and various statistical measures applicable to them, such as standard deviation and kurtosis.
The values may be numbers, such as real numbers or integers, for example representing a person's height in centimeters, but may also be nominal data (i.e., not consisting of numerical values), for example representing a person's ethnicity. More generally, values may be of any of the kinds described as a level of measurement. For each variable, the values are normally all of the same kind. Missing values may exist, which must be indicated somehow.
In statistics, data sets usually come from actual observations obtained by sampling a statistical population, and each row corresponds to the observations on one element of that population. Data sets may further be generated by algorithms for the purpose of testing certain kinds of software. Some modern statistical analysis software such as SPSS still present their data in the classical data set fashion. If data is missing or suspicious an imputation method may be used to complete a data set.
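A minimal sketch of these properties in Python, using the pandas library (the tiny table, its column names and its values are invented for illustration): each column is a variable, each row a record, per-variable statistics such as the standard deviation and kurtosis can be computed, and a missing value can be filled by simple mean imputation:
import pandas as pd

data = pd.DataFrame({
    "height_cm": [170.0, 165.0, None, 180.0, 175.0, 168.0],  # numerical variable with one missing value
    "ethnicity": ["a", "b", "a", "c", "b", "a"],              # nominal (non-numerical) variable
})
print(data["height_cm"].std())       # standard deviation of the numerical variable
print(data["height_cm"].kurtosis())  # kurtosis of the numerical variable
data["height_cm"] = data["height_cm"].fillna(data["height_cm"].mean())  # simple mean imputation
print(data)                          # the completed data set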
Classics
Several classic data sets have been used extensively in the statistical literature:
Iris flower data set – Multivariate data set introduced by Ronald Fisher (1936). Provided online by University of California-Irvine Machine Learning Repository.
MNIST database – Images of handwritten digits commonly used to test classification, clustering, and image processing algorithms
Categorical data analysis – Data sets used in the book, An Introduction to Categorical Data Analysis, provided online by UCLA Advanced Research Computing.
Robust statistics – Data sets used in Robust Regression and Outlier Detection (Rousseeuw and Leroy, 1987). Provided online at the University of Cologne.
Time series – Data used in Chatfield's book, The Analysis of Time Series, are provided on-line by StatLib.
Extreme values – Data used in the book, An Introduction to the Statistical Modeling of Extreme Values are a snapshot of the data as it was provided on-line by Stuart Coles, the book's author.
Bayesian Data Analysis – Data used in the book are provided on-line (archive link) by Andrew Gelman, one of the book's authors.
The Bupa liver data – Used in several papers in the machine learning (data mining) literature.
Anscombe's quartet – Small data set illustrating the importance of graphing the data to avoid statistical fallacies.
Example
Loading datasets using Python:
pip install datasets
from datasets import load_dataset
dataset = load_dataset("dataset_name")  # replace "dataset_name" with the identifier of the dataset to load
See also
List of datasets for machine-learning research
List of datasets in computer vision and image processing
Data blending
Data (computer science)
Sampling
Data store
Interoperability
Data collection system
References
External links
Data.gov – the U.S. Government's open data
GCMD – the Global Change Master Directory containing over 34,000 descriptions of Earth science and environmental science data sets and services
Humanitarian Data Exchange (HDX) – an open humanitarian data sharing platform managed by the United Nations Office for the Coordination of Humanitarian Affairs.
NYC Open Data – free public data published by New York City agencies and other partners.
Relational data set repository
Research Pipeline – a wiki/website with links to data sets on many different topics
StatLib–JASA Data Archive
UCI – a machine learning repository
UK Government Public Data
World Bank Open Data – Free and open access to global development data by World Bank
Computer data
Statistical data sets | Data set | [
"Technology"
] | 868 | [
"Computer data",
"Data"
] |
8,524 | https://en.wikipedia.org/wiki/Deuterium | Deuterium (hydrogen-2, symbol H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen; the other is protium, or hydrogen-1, H. The deuterium nucleus (deuteron) contains one proton and one neutron, whereas the far more common H has no neutrons. Deuterium has a natural abundance in Earth's oceans of about one atom of deuterium in every 6,420 atoms of hydrogen. Thus, deuterium accounts for about 0.0156% by number (0.0312% by mass) of all hydrogen in the ocean: tonnes of deuterium – mainly as HOD (or HOH or HHO) and only rarely as DO (or HO) (deuterium oxide, also known as heavy water) – in tonnes of water. The abundance of H changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water).
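A quick back-of-the-envelope check of these two percentages (treating deuterium as roughly twice the mass of ordinary hydrogen, which is why the mass fraction is about double the number fraction):
d_per_h = 1 / 6420                                                  # one deuterium atom per 6,420 hydrogen atoms
number_fraction = d_per_h * 100                                     # about 0.0156 percent by number
mass_fraction = 2 * d_per_h / (2 * d_per_h + (1 - d_per_h)) * 100   # about 0.031 percent by mass
print(number_fraction, mass_fraction)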
The name deuterium comes from Greek deuteros, meaning "second". American chemist Harold Urey discovered deuterium in 1931. Urey and others produced samples of heavy water in which the H had been highly concentrated. The discovery of deuterium won Urey a Nobel Prize in 1934.
Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of H to H (≈26 atoms of deuterium per million hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter. The analysis of deuterium–protium ratios (HHR) in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogen atoms). This reinforces theories that much of Earth's ocean water is of cometary origin. The HHR of comet 67P/Churyumov–Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth water. This figure is the highest yet measured in a comet. HHRs thus continue to be an active topic of research in both astronomy and climatology.
Differences from common hydrogen (protium)
Chemical symbol
Deuterium is often represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by H. IUPAC allows both D and H, though H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (H) confers non-negligible chemical differences with H compounds. Deuterium has a mass of , about twice the mean hydrogen atomic weight of , or twice protium's mass of . The isotope weight ratios within other elements are largely insignificant in this regard.
Spectroscopy
In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For a hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels.
The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of mass of the electron to the nucleus. For H, this amount is about , or 1.000545, and for H it is even smaller: , or 1.0002725. The energies of electronic spectra lines for H and H therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by 0.0272%. In astronomical observation, this corresponds to a blue Doppler shift of 0.0272% of the speed of light, or 81.6 km/s.
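A short numerical check of these figures, using rounded particle masses in atomic mass units (the constants below are approximate values assumed for illustration):
m_e = 0.00054858   # electron mass, u
m_p = 1.0072765    # proton mass, u
m_d = 2.0135532    # deuteron mass, u
ratio_h = 1 + m_e / m_p            # about 1.000545
ratio_d = 1 + m_e / m_d            # about 1.0002725
shift = ratio_h / ratio_d          # about 1.000272, ratio of deuterium to protium line energies
print((shift - 1) * 100)           # about 0.0272 percent shorter wavelengths for deuterium
print((shift - 1) * 299792.458)    # about 81.6 km/s equivalent Doppler shift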
The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, though deuterium NMR on its own right is also possible.
Big Bang nucleosynthesis
Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium.
Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore, any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused into helium). However, very soon thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion or nucleosynthesis. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decay. The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (no stable nucleus has a mass number of 5 or 8) meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as the Sun.
Abundance
Deuterium occurs in trace amounts naturally as deuterium gas (H or D), but most deuterium atoms in the Universe are bonded with H to form a gas called hydrogen deuteride (HD or HH). Similarly, natural water contains deuterated molecules, almost all as semiheavy water HDO with only one deuterium.
The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there is no known natural process other than Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton–proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of H seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.
The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in the Milky Way galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space.
The abundance of deuterium in Jupiter's atmosphere has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter, and this abundance is thought to represent close to the primordial Solar System ratio. This is about 17% of the terrestrial ratio of 156 deuterium atoms per million hydrogen atoms.
Comets such as Comet Hale-Bopp and Halley's Comet have been measured to contain more deuterium (about 200 atoms per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans (155.76 ± 0.1, but in fact from 153 to 156 ppm), emphasizes the theory that Earth's surface water may be largely from comets. Most recently the HHR of 67P/Churyumov–Gerasimenko as measured by Rosetta is about three times that of Earth water. This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin.
Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus.
Production
Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods.
In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.
The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.
Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurized heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulfide gas at high pressure.
While India is self-sufficient in heavy water for its own use, India also exports reactor-grade heavy water.
Properties
Data for molecular deuterium
Formula: or
Density: 0.180 kg/m³ at STP (0 °C, 101325 Pa).
Atomic weight: 2.0141017926 Da.
Mean abundance in ocean water (from VSMOW) 155.76 ± 0.1 atoms of deuterium per million atoms of all isotopes of hydrogen (about 1 atom of in 6420); that is, about 0.015% of all atoms of hydrogen (any isotope)
Data at about 18 K for H (triple point):
Density:
Liquid: 162.4 kg/m³
Gas: 0.452 kg/m³
Liquefied HO: 1105.2 kg/m³ at STP
Viscosity: 12.6 μPa·s at 300 K (gas phase)
Specific heat capacity at constant pressure c:
Solid: 2950 J/(kg·K)
Gas: 5200 J/(kg·K)
Physical properties
Compared to hydrogen in its natural composition on Earth, pure deuterium (H) has a higher melting point (18.72 K vs. 13.99 K), a higher boiling point (23.64 vs. 20.27 K), a higher critical temperature (38.3 vs. 32.94 K) and a higher critical pressure (1.6496 vs. 1.2858 MPa).
The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. Heavy water, for example, is more viscous than ordinary water. There are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that H is harder to remove from carbon than H.
Deuterium can replace H in water molecules to form heavy water (HO), which is about 10.6% denser than normal water (so that ice made from it sinks in normal water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a person might drink of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.
Quantum properties
The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from normal hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.
The triplet deuteron nucleon is barely bound at , and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of . There is no such stable particle, but this virtual particle transiently exists during neutron–proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.
Nuclear properties (deuteron)
Deuteron mass and radius
The deuterium nucleus is called a deuteron. It has a mass of (just over ).
The charge radius of a deuteron is
Like the proton radius, measurements using muonic deuterium produce a smaller result: .
Spin and energy
Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons. (H, Li, B, N, Ta; the long-lived radionuclides K, V, La, Lu also occur naturally.) Most odd–odd nuclei are unstable to beta decay, because the decay products are even–even, and thus more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, mainly due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron to be unstable.
The proton and neutron in deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.
Diatomic deuterium (H) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one.
Isospin singlet state of the deuteron
Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T).
Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin-1/2, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is
, which can also be written :
This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is
and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable.
Approximated wavefunction of the deuteron
The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative).
The deuteron, being an isospin singlet, is antisymmetric under nucleons exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states:
Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry.
Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry.
In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has , .
In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has , .
Since gives a stronger nuclear attraction, the deuterium ground state is in the , state.
The same considerations lead to the possible states of an isospin triplet having , or , . Thus, the state of lowest energy has , , higher than that of the isospin singlet.
The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different s and l states. That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as , may become a state of , . Parity is still constant in time, so these do not mix with odd l states (such as , ). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the , state and the , state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore . This is the total spin of the deuterium nucleus.
To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly with some .
Magnetic and electric multipoles
In order to find theoretically the deuterium magnetic dipole moment μ, one uses the formula for a nuclear magnetic moment
with
g and g are g-factors of the nucleons.
Since the proton and neutron have different values for g and g, one must separate their contributions. Each gets half of the deuterium orbital angular momentum and spin . One arrives at
where subscripts p and n stand for the proton and neutron, and .
By using the same identities as here and using the value , one gets the following result, in units of the nuclear magneton μ
For the , state (), we obtain
For the , state (), we obtain
The measured value of the deuterium magnetic dipole moment, is , which is 97.5% of the value obtained by simply adding moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation , state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment.
But the slightly lower experimental number than that which results from simple addition of proton and (negative) neutron moments shows that deuterium is actually a linear combination of mostly , state with a slight admixture of , state.
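A quick numerical check of the 97.5% figure, using rounded values of the proton, neutron, and measured deuteron magnetic moments in nuclear magnetons (assumed constants for illustration):
mu_p = 2.79285           # proton magnetic moment
mu_n = -1.91304          # neutron magnetic moment
mu_d_measured = 0.85744  # measured deuteron magnetic moment
simple_sum = mu_p + mu_n           # about 0.880, the spin-aligned, zero-orbital-angular-momentum estimate
print(mu_d_measured / simple_sum)  # about 0.975, i.e. roughly 97.5% of the simple estimate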
The electric dipole is zero as usual.
The measured electric quadrupole of the deuterium is . While the order of magnitude is reasonable, since the deuteron radius is of order of 1 femtometer (see below) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the state (which is the dominant one) and does get a contribution from a term mixing the and the states, because the electric quadrupole operator does not commute with angular momentum.
The latter contribution is dominant in the absence of a pure contribution, but cannot be calculated without knowing the exact spatial form of the nucleons wavefunction inside the deuterium.
Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.
Applications
Nuclear reactors
Deuterium is used in heavy water moderated fission reactors, usually as liquid HO, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium.
In research reactors, liquid H is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments.
Experimentally, deuterium is the most common nuclide used in fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the deuterium–tritium (DT) reaction. There is an even higher-yield H–He fusion reaction, though the breakeven point of H–He is higher than that of most other fusion reactions; together with the scarcity of He, this makes it implausible as a practical power source, at least until DT and deuterium–deuterium (DD) fusion have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology.
NMR spectroscopy
Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties, which differ from those of the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for H. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl or CHCl) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference.
Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the configuration of hydrocarbon chains in lipid bilayers can be quantified using solid state deuterium NMR with deuterium-labelled lipid molecules.
Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example.
Mass spectrometry
Deuterated (i.e. where all or some hydrogen atoms are replaced with deuterium) compounds are often used as internal standards in mass spectrometry. Like other isotopically labeled species, such standards improve accuracy, while often at a much lower cost than other isotopically labeled standards. Deuterated molecules are usually prepared via hydrogen isotope exchange reactions.
Tracing
In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from normal hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; H–carbon bond vibrations are found in spectral regions free of other signals.
Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes O and O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to latitude). The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of H to H is usually indicated with a delta as δH and the geographic patterns of these values are plotted in maps termed as isoscapes. Stable isotopes are incorporated into plants and animals and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins.
Contrast properties
Neutron scattering techniques particularly profit from availability of deuterated samples: The H and H cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of normal hydrogen is its large incoherent neutron cross section, which is nil for H. The substitution of deuterium for normal hydrogen thus reduces scattering noise.
Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. Because hydrogen atoms (including deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility, fill a niche in many studies of macromolecules in biology and many other areas.
Nuclear weapons
See below. Most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements; yet such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion in hydrogen bombs, requires heavy hydrogen (deuterium, tritium, or both).
Drugs
A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval.
Reinforced essential nutrients
Deuterium can be used to reinforce specific oxidation-vulnerable C–H bonds within essential or conditionally essential nutrients, such as certain amino acids, or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damage living cells. Deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia.
Thermostabilization
Live vaccines, such as oral polio vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl.
Slowing circadian oscillations
Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. In rats, chronic intake of 25% HO disrupts circadian rhythm by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period.
History
Suspicion of lighter element isotopes
The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. At that time the neutron had not yet been discovered, and the prevailing theory was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen with a measured average atomic mass very close to , the known mass of the proton, always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes.
Deuterium detected
It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen to of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards (now National Institute of Standards and Technology) in Washington, DC. The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.
Naming of the isotope and Nobel Prize
Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from Gilbert N. Lewis who had proposed the name "deutium". The name comes from Greek deuteros 'second', and the nucleus was to be called a "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted to call the isotope "diplogen", from Greek diploos 'double', and the nucleus to be called "diplon".
The amount inferred for normal abundance of deuterium was so small (only about 1 atom in 6400 hydrogen atoms in seawater [156 parts per million]) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it hadn't been suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis, Urey's graduate advisor at Berkeley, had prepared and characterized the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explicable, Urey was awarded the Nobel Prize in Chemistry only three years after the isotope's isolation. Lewis was deeply disappointed by the Nobel Committee's decision in 1934 and several high-ranking administrators at Berkeley believed this disappointment played a central role in his suicide a decade later.
"Heavy water" experiments in World War II
Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums.
During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow them to produce plutonium for an atomic bomb. Ultimately it led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.
After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. The Germans had completed only a small, partly built experimental reactor (which had been hidden away) and had been unable to sustain a chain reaction. By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with Chicago Pile-1 in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example.
In thermonuclear weapons
The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952, was the first fully successful hydrogen bomb (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical, insulated vacuum flask or cryostat, held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction.
Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons. "Pure" fusion weapons such as the Tsar Bomba are believed to be obsolete. In most modern ("boosted") thermonuclear weapons, fusion directly provides only a small fraction of the total energy. Fission of a natural uranium-238 tamper by fast neutrons produced from D–T fusion accounts for a much larger (i.e. boosted) energy release than the fusion reaction itself.
Modern research
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand gas giant planets, such as Jupiter, Saturn and some exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
Antideuterium
An antideuteron is the antimatter counterpart of the deuteron, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but antideuterium has not yet been created. The proposed symbol for antideuterium is , that is, D with an overbar.
See also
Isotopes of hydrogen
Tokamak
References
External links
Environmental isotopes
Isotopes of hydrogen
Neutron moderators
Nuclear fusion fuels
Nuclear materials
Subatomic particles with spin 1
Medical isotopes | Deuterium | [
"Physics",
"Chemistry"
] | 8,627 | [
"Isotopes of hydrogen",
"Environmental isotopes",
"Isotopes",
"Materials",
"Nuclear materials",
"Chemicals in medicine",
"Matter",
"Medical isotopes"
] |
8,525 | https://en.wikipedia.org/wiki/Digital%20signal%20processing | Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.
DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification and can be implemented in the time, frequency, and spatio-temporal domains.
The application of digital computation to signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression. Digital signal processing is also fundamental to digital technology, such as digital telecommunication and wireless communications. DSP is applicable to both streaming data and static (stored) data.
Signal sampling
To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC). Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set. Rounding real numbers to integers is an example.
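A minimal NumPy sketch of the two stages (the sampling rate, test signal, and 4-bit quantizer below are arbitrary choices for illustration):
import numpy as np
fs = 1000.0                                 # assumed sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)              # discretization: equally spaced sampling instants
x = np.sin(2 * np.pi * 50 * t)              # amplitude measured at each instant
levels = 2 ** 4                             # quantization to 16 possible amplitude values
x_q = np.round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1  # round each sample to the nearest level
print(np.max(np.abs(x - x_q)))              # worst-case quantization error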
The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this. It is common to use an anti-aliasing filter to limit the signal bandwidth to comply with the sampling theorem; however, careful selection of this filter is required because the reconstructed signal will be the filtered signal, plus residual aliasing from imperfect stop-band rejection, rather than the original (unfiltered) signal.
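A small numerical illustration of why the condition matters (frequencies chosen arbitrarily): a 7 Hz sine sampled at only 10 Hz yields exactly the same sample values as a sign-flipped 3 Hz sine, so the two cannot be distinguished after sampling:
import numpy as np
fs = 10.0                                  # below the 14 Hz needed for a 7 Hz tone
n = np.arange(50)
high = np.sin(2 * np.pi * 7 * n / fs)      # samples of the 7 Hz signal
alias = -np.sin(2 * np.pi * 3 * n / fs)    # samples of a 3 Hz signal, sign-flipped
print(np.allclose(high, alias))            # True: the 7 Hz tone aliases onto 3 Hz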
Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies (quantization error), created by the abstract process of sampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a frequency spectrum or a set of statistics. But often it is another quantized signal that is converted back to analog form by a digital-to-analog converter (DAC).
Domains
DSP engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation.
Time and space domains
Time domain refers to the analysis of signals with respect to time. Similarly, space domain refers to the analysis of signals with respect to position, e.g., pixel location for the case of image processing.
The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. The surrounding samples may be identified with respect to time or space. The output of a linear digital filter to any given input may be calculated by convolving the input signal with an impulse response.
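For example, a sketch of time-domain filtering as convolution, using an arbitrary 5-tap moving-average impulse response:
import numpy as np
x = np.random.randn(200)              # arbitrary input signal
h = np.ones(5) / 5                    # impulse response of a 5-point moving-average filter
y = np.convolve(x, h, mode='same')    # each output sample is a weighted sum of surrounding input samples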
Frequency domain
Signals are converted from time or space domain to the frequency domain usually through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.
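An illustrative sketch (test signal and sampling rate chosen arbitrarily) of obtaining magnitude, phase, and the power spectrum from the discrete Fourier transform:
import numpy as np
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
X = np.fft.rfft(x)                          # frequency-domain representation
freqs = np.fft.rfftfreq(len(x), 1 / fs)     # frequency of each bin, Hz
magnitude, phase = np.abs(X), np.angle(X)
power = magnitude ** 2                      # power spectrum: squared magnitude per frequency
print(freqs[np.argmax(power)])              # strongest component, here 50 Hz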
The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum- or spectral analysis.
Filtering, particularly in non-real-time work, can also be achieved in the frequency domain by applying the filter and then converting back to the time domain. This can be an efficient implementation and can give essentially any filter response, including excellent approximations to brickwall filters.
There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the harmonic structure of the original spectrum.
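A minimal sketch of one common variant, the real cepstrum (here the second transform is taken as an inverse transform, a frequent convention; the small offset merely avoids taking the logarithm of zero):
import numpy as np
def real_cepstrum(x):
    spectrum = np.fft.fft(x)                           # first Fourier transform
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)   # logarithm of the magnitude spectrum
    return np.fft.ifft(log_magnitude).real             # second (inverse) transform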
Z-plane analysis
Digital filters come in both infinite impulse response (IIR) and finite impulse response (FIR) types. Whereas FIR filters are always stable, IIR filters have feedback loops that may become unstable and oscillate. The Z-transform provides a tool for analyzing stability issues of digital IIR filters. It is analogous to the Laplace transform, which is used to design and analyze analog IIR filters.
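As a small sketch using SciPy (the one-pole recursive filter below is an arbitrary example), stability can be checked by confirming that every pole of the transfer function lies inside the unit circle of the z-plane:
import numpy as np
from scipy.signal import tf2zpk
b, a = [1.0], [1.0, -0.9]            # IIR filter y[n] = x[n] + 0.9*y[n-1]
zeros, poles, gain = tf2zpk(b, a)    # z-plane description of the filter
print(np.all(np.abs(poles) < 1))     # True: the single pole at z = 0.9 lies inside the unit circle, so the filter is stable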
Autoregression analysis
A signal is represented as linear combination of its previous samples. Coefficients of the combination are called autoregression coefficients. This method has higher frequency resolution and can process shorter signals compared to the Fourier transform. Prony's method can be used to estimate phases, amplitudes, initial phases and decays of the components of signal. Components are assumed to be complex decaying exponents.
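A minimal sketch of estimating autoregression coefficients with an ordinary least-squares fit (the model order and the plain least-squares approach, rather than any particular textbook estimator, are assumptions made for illustration):
import numpy as np
def ar_coefficients(x, order):
    # Fit x[n] ≈ a[0]*x[n-1] + ... + a[order-1]*x[n-order] by least squares
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
x = np.random.randn(500)
print(ar_coefficients(x, order=2))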
Time-frequency analysis
A time-frequency representation of a signal can capture both the temporal evolution and the frequency structure of the analyzed signal. Temporal and frequency resolution are limited by the uncertainty principle, and the tradeoff is adjusted by the width of the analysis window. Linear techniques such as the short-time Fourier transform, wavelet transform, and filter bank, non-linear techniques (e.g., the Wigner–Ville transform), and autoregressive methods (e.g., the segmented Prony method) are used to represent a signal on the time-frequency plane. Non-linear and segmented Prony methods can provide higher resolution, but may produce undesirable artifacts. Time-frequency analysis is usually used for the analysis of non-stationary signals. For example, methods of fundamental frequency estimation, such as RAPT and PEFAC, are based on windowed spectral analysis.
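A short sketch using SciPy's short-time Fourier transform (the window length and the rising-frequency test signal are arbitrary choices):
import numpy as np
from scipy.signal import stft
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * (50 + 100 * t) * t)     # chirp whose frequency rises over time
f, times, Zxx = stft(x, fs=fs, nperseg=256)    # time-frequency representation
print(Zxx.shape)                               # (frequency bins, time frames)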
Wavelet
In numerical analysis and functional analysis, a discrete wavelet transform is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information. The accuracy of the joint time-frequency resolution is limited by the uncertainty principle of time-frequency.
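A minimal sketch of a single-level discrete wavelet transform, assuming the third-party PyWavelets package (pywt) and an arbitrarily chosen Daubechies wavelet:
import numpy as np
import pywt
x = np.random.randn(256)
approx, detail = pywt.dwt(x, 'db2')   # low-frequency (approximation) and high-frequency (detail) coefficients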
Empirical mode decomposition
Empirical mode decomposition is based on decomposition signal into intrinsic mode functions (IMFs). IMFs are quasi-harmonical oscillations that are extracted from the signal.
Implementation
DSP algorithms may be run on general-purpose computers and digital signal processors. DSP algorithms are also implemented on purpose-built hardware such as application-specific integrated circuit (ASICs). Additional technologies for digital signal processing include more powerful general-purpose microprocessors, graphics processing units, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors.
For systems that do not have a real-time computing requirement and the signal data (either input or output) exists in data files, processing may be done economically with a general-purpose computer. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the DCT and FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example of such an application is processing digital photographs with software such as Photoshop.
When the application requirement is real-time, DSP is often implemented using specialized or dedicated processors or microprocessors, sometimes using multiple processors or multiple processing cores. These may process data using fixed-point arithmetic or floating point. For more demanding applications FPGAs may be used. For the most demanding applications or high-volume products, ASICs might be designed specifically for the application.
Parallel implementations of DSP algorithms, utilizing multi-core CPU and many-core GPU architectures, are developed to improve the performances in terms of latency of these algorithms.
In audio applications, native processing is done by the computer's CPU rather than by DSP or outboard processing, which is done by additional third-party DSP chips located on expansion cards or external hardware boxes or racks. Many digital audio workstations such as Logic Pro, Cubase, Digital Performer and Pro Tools LE use native processing. Others, such as Pro Tools HD, Universal Audio's UAD-1 and TC Electronic's Powercore, use DSP processing.
Applications
General application areas for DSP include
Audio signal processing
Audio data compression e.g. MP3
Video data compression
Computer graphics
Digital image processing
Photo manipulation
Speech processing
Speech recognition
Data transmission
Radar
Sonar
Financial signal processing
Economic forecasting
Seismology
Biomedicine
Weather forecasting
Specific examples include speech coding and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, audio crossovers and equalization, digital synthesizers, and audio effects units. DSP has been used in hearing aid technology since 1996, which allows for automatic directional microphones, complex digital noise reduction, and improved adjustment of the frequency response.
Techniques
Bilinear transform
Discrete Fourier transform
Discrete-time Fourier transform
Filter design
Goertzel algorithm
Least-squares spectral analysis
LTI system theory
Minimum phase
s-plane
Transfer function
Z-transform
Related fields
Analog signal processing
Automatic control
Computer engineering
Computer science
Data compression
Dataflow programming
Discrete cosine transform
Electrical engineering
Fourier analysis
Information theory
Machine learning
Real-time computing
Stream processing
Telecommunications
Time series
Wavelet
Further reading
Jonathan M. Blackledge, Martin Turner: Digital Signal Processing: Mathematical and Computational Methods, Software Development and Applications, Horwood Publishing,
James D. Broesch: Digital Signal Processing Demystified, Newnes,
Paul M. Embree, Damon Danieli: C++ Algorithms for Digital Signal Processing, Prentice Hall,
Hari Krishna Garg: Digital Signal Processing Algorithms, CRC Press,
P. Gaydecki: Foundations Of Digital Signal Processing: Theory, Algorithms And Hardware Design, Institution of Electrical Engineers,
Ashfaq Khan: Digital Signal Processing Fundamentals, Charles River Media,
Sen M. Kuo, Woon-Seng Gan: Digital Signal Processors: Architectures, Implementations, and Applications, Prentice Hall,
Paul A. Lynn, Wolfgang Fuerst: Introductory Digital Signal Processing with Computer Applications, John Wiley & Sons,
Richard G. Lyons: Understanding Digital Signal Processing, Prentice Hall,
Vijay Madisetti, Douglas B. Williams: The Digital Signal Processing Handbook, CRC Press,
James H. McClellan, Ronald W. Schafer, Mark A. Yoder: Signal Processing First, Prentice Hall,
Bernard Mulgrew, Peter Grant, John Thompson: Digital Signal Processing – Concepts and Applications, Palgrave Macmillan,
Boaz Porat: A Course in Digital Signal Processing, Wiley,
John G. Proakis, Dimitris Manolakis: Digital Signal Processing: Principles, Algorithms and Applications, 4th ed, Pearson, April 2006,
John G. Proakis: A Self-Study Guide for Digital Signal Processing, Prentice Hall,
Charles A. Schuler: Digital Signal Processing: A Hands-On Approach, McGraw-Hill,
Doug Smith: Digital Signal Processing Technology: Essentials of the Communications Revolution, American Radio Relay League,
Hayes, Monson H. Statistical digital signal processing and modeling. John Wiley & Sons, 2009. (with MATLAB scripts)
References
Digital electronics
Computer engineering
Telecommunication theory
Radar signal processing | Digital signal processing | [
"Technology",
"Engineering"
] | 2,540 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering",
"Digital electronics"
] |
8,528 | https://en.wikipedia.org/wiki/Disjunction%20introduction | Disjunction introduction or addition (also called or introduction) is a rule of inference of propositional logic and almost every other deduction system. The rule makes it possible to introduce disjunctions to logical proofs. It is the inference that if P is true, then P or Q must be true.
An example in English:
Socrates is a man.
Therefore, Socrates is a man or pigs are flying in formation over the English Channel.
The rule can be expressed as:
where the rule is that whenever instances of "P" appear on lines of a proof, "P ∨ Q" can be placed on a subsequent line.
More generally, it is also a simple valid argument form (meaning that if the premise is true, then the conclusion is also true, as any rule of inference should guarantee) and an immediate inference, since it has a single proposition in its premises.
Disjunction introduction is not a rule in some paraconsistent logics because in combination with other rules of logic, it leads to explosion (i.e. everything becomes provable) and paraconsistent logic tries to avoid explosion and to be able to reason with contradictions. One of the solutions is to introduce disjunction with over rules. See .
Formal notation
The disjunction introduction rule may be written in sequent notation:
P ⊢ (P ∨ Q)
where ⊢ is a metalogical symbol meaning that (P ∨ Q) is a syntactic consequence of P in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
P → (P ∨ Q)
where P and Q are propositions expressed in some formal system.
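The rule can also be checked mechanically in a proof assistant. The following is a minimal sketch in Lean 4, where the constructor Or.inl plays exactly the role of disjunction introduction (the theorem name is arbitrary):

```lean
-- Disjunction introduction: from a proof of P, conclude P ∨ Q.
theorem disjunction_introduction (P Q : Prop) (hp : P) : P ∨ Q :=
  Or.inl hp
```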
References
Rules of inference
Paraconsistent logic
Theorems in propositional logic | Disjunction introduction | [
"Mathematics"
] | 336 | [
"Theorems in propositional logic",
"Rules of inference",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
8,529 | https://en.wikipedia.org/wiki/Disjunction%20elimination | In propositional logic, disjunction elimination (sometimes named proof by cases, case analysis, or or elimination) is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a logical proof. It is the inference that if a statement P implies a statement Q and a statement R also implies Q, then if either P or R is true, then Q has to be true. The reasoning is simple: since at least one of the statements P and R is true, and since either of them would be sufficient to entail Q, Q is certainly true.
An example in English:
If I'm inside, I have my wallet on me.
If I'm outside, I have my wallet on me.
It is true that either I'm inside or I'm outside.
Therefore, I have my wallet on me.
The rule can be stated as:
P → Q, R → Q, P ∨ R
Therefore, Q
where the rule is that whenever instances of "P → Q", "R → Q", and "P ∨ R" appear on lines of a proof, "Q" can be placed on a subsequent line.
Formal notation
The disjunction elimination rule may be written in sequent notation:
(P → Q), (R → Q), (P ∨ R) ⊢ Q
where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q, R → Q, and P ∨ R in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:
(((P → Q) ∧ (R → Q)) ∧ (P ∨ R)) → Q
where P, Q, and R are propositions expressed in some formal system.
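As with disjunction introduction, the rule can be stated as a one-line theorem in a proof assistant. A minimal sketch in Lean 4 (theorem name arbitrary), where Or.elim performs the case analysis:

```lean
-- Disjunction elimination (proof by cases): from P → Q, R → Q and P ∨ R, conclude Q.
theorem disjunction_elimination (P Q R : Prop)
    (hpq : P → Q) (hrq : R → Q) (hpr : P ∨ R) : Q :=
  Or.elim hpr hpq hrq
```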
See also
Disjunction
Argument in the alternative
Disjunctive normal form
Proof by exhaustion
References
Rules of inference
Theorems in propositional logic | Disjunction elimination | [
"Mathematics"
] | 313 | [
"Rules of inference",
"Theorems in propositional logic",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
8,536 | https://en.wikipedia.org/wiki/Differential%20cryptanalysis | Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformation, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key (cryptography key).
History
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography." Within IBM, differential cryptanalysis was known as the "T-attack" or "Tickle attack".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. In contrast, the scheme can successfully cryptanalyze DES with an effort on the order of 2^47 chosen plaintexts.
Attack mechanics
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant difference. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials (ΔX, ΔY), where
ΔY = S(X ⊕ ΔX) ⊕ S(X)
(and ⊕ denotes exclusive or) for each such S-box S. In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
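These statistics are conventionally tabulated in a difference distribution table. The sketch below builds such a table for a hypothetical 4-bit S-box; the S-box values are an arbitrary permutation chosen only for illustration and do not come from any real cipher.

```python
# Toy sketch: difference distribution table (DDT) for a hypothetical 4-bit S-box.
# The S-box below is an arbitrary permutation, used only to illustrate the idea.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):                  # input difference
        for x in range(n):
            dy = sbox[x ^ dx] ^ sbox[x]  # resulting output difference
            table[dx][dy] += 1
    return table

table = ddt(SBOX)
# Apart from the trivial entry table[0][0] == 16, high counts mark differentials
# that hold with probability count/16 and are the ones an attacker would exploit.
best = max((table[dx][dy], dx, dy) for dx in range(1, 16) for dy in range(16))
print("most frequent non-trivial differential (count, dx, dy):", best)
```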
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least r − 1 rounds, where r is the total number of rounds. The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against the attack.
Attack in detail
The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or S-boxes). Observing the desired output difference (between two chosen or known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (implying a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability of 4/256 (possible with the non-linear function in the AES cipher, for instance), then for only 4 values (or 2 pairs) of inputs is that differential possible. Suppose we have a non-linear function where the key is XOR'ed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values {6, 7} and observes the correct output difference, it means that either 6 ⊕ K = 2 or 6 ⊕ K = 4, and hence the key K is either 4 or 2.
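The key-guessing step of that example can be sketched in a few lines of code. Only the numbers given above are used; the 4-bit key space and the treatment of {2,3} and {4,5} as ordered pairs (X, X ⊕ 1) are assumptions made to mirror the arithmetic of the example.

```python
# Toy sketch of the key-guessing step described above.
# Ordered S-box input pairs (X, X ^ 1) for which the observed differential is possible:
GOOD_PAIRS = {(2, 3), (4, 5)}

def candidate_keys(p0, p1, key_bits=4):
    """Keys K for which the whitened pair (p0 ^ K, p1 ^ K) allows the differential."""
    return [k for k in range(2 ** key_bits)
            if (p0 ^ k, p1 ^ k) in GOOD_PAIRS]

# The attacker queried the plaintext pair (6, 7) and saw the expected output difference:
print(candidate_keys(6, 7))   # prints [2, 4] -- the two keys named in the example
```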
In essence, to protect a cipher from the attack, one would ideally seek a maximum differential probability as close to 2^−(n − 1) as possible for an n-bit non-linear function, thereby achieving differential uniformity. When this happens, the differential attack requires as much work to determine the key as simply brute-forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much weaker non-linear function. The incredibly high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box, AES emits no fixed differential with a probability higher than (4/256)^50, or 2^−300, which is far below the required threshold of 2^−128 for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been only 2^−200.
There exist no bijections for even-sized inputs/outputs with 2-uniformity. They exist in odd fields (such as GF(2^7)) using either cubing or inversion (there are other exponents that can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in the 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks. That is, they are possible to describe and solve via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
Specialized types
Higher-order differential cryptanalysis
Truncated differential cryptanalysis
Impossible differential cryptanalysis
Boomerang attack
See also
Cryptography
Integral cryptanalysis
Linear cryptanalysis
Differential equations of addition
References
Further reading
External links
A tutorial on differential (and linear) cryptanalysis
Helger Lipmaa's links on differential cryptanalysis
Cryptographic attacks | Differential cryptanalysis | [
"Technology"
] | 1,674 | [
"Cryptographic attacks",
"Computer security exploits"
] |
8,560 | https://en.wikipedia.org/wiki/Design | A design is the concept of or proposal for an object, process, or system. The word design refers to something that is or has been intentionally created by a thinking agent, and is sometimes used to refer to the inherent nature of something – its design. The verb to design expresses the process of developing a design. In some cases, the direct construction of an object without an explicit prior plan may also be considered to be a design (such as in arts and crafts). A design is expected to have a purpose within a certain context, usually having to satisfy certain goals and constraints and to take into account aesthetic, functional, economic, environmental, or socio-political considerations. Traditional examples of designs include architectural and engineering drawings, circuit diagrams, sewing patterns, and less tangible artefacts such as business process models.
Designing
People who produce designs are called designers. The term 'designer' usually refers to someone who works professionally in one of the various design areas. Within the professions, the word 'designer' is generally qualified by the area of practice (for example: a fashion designer, a product designer, a web designer, or an interior designer), but it can also designate other practitioners such as architects and engineers (see below: Types of designing). A designer's sequence of activities to produce a design is called a design process, with some employing designated processes such as design thinking and design methods. The process of creating a design can be brief (a quick sketch) or lengthy and complicated, involving considerable research, negotiation, reflection, modeling, interactive adjustment, and re-design.
Designing is also a widespread activity outside of the professions of those formally recognized as designers. In his influential book The Sciences of the Artificial, the interdisciplinary scientist Herbert A. Simon proposed that, "Everyone designs who devises courses of action aimed at changing existing situations into preferred ones." According to the design researcher Nigel Cross, "Everyone can – and does – design," and "Design ability is something that everyone has, to some extent, because it is embedded in our brains as a natural cognitive function."
History of design
The study of design history is complicated by varying interpretations of what constitutes 'designing'. Many design historians, such as John Heskett, look to the Industrial Revolution and the development of mass production. Others subscribe to conceptions of design that include pre-industrial objects and artefacts, beginning their narratives of design in prehistoric times. Originally situated within art history, the historical development of the discipline of design history coalesced in the 1970s, as interested academics worked to recognize design as a separate and legitimate target for historical research. Early influential design historians include German-British art historian Nikolaus Pevsner and Swiss historian and architecture critic Sigfried Giedion.
Design education
In Western Europe, institutions for design education date back to the nineteenth century. The Norwegian National Academy of Craft and Art Industry was founded in 1818, followed by the United Kingdom's Government School of Design (1837), and Konstfack in Sweden (1844). The Rhode Island School of Design was founded in the United States in 1877. The German art and design school Bauhaus, founded in 1919, greatly influenced modern design education.
Design education covers the teaching of theory, knowledge and values in the design of products, services, and environments, with a focus on the development of both particular and general skills for designing. Traditionally, its primary orientation has been to prepare students for professional design practice, based on project work and studio, or atelier, teaching methods.
There are also broader forms of higher education in design studies and design thinking. Design is also a part of general education, for example within the curriculum topic, Design and Technology. The development of design in general education in the 1970s created a need to identify fundamental aspects of 'designerly' ways of knowing, thinking, and acting, which resulted in establishing design as a distinct discipline of study.
Design process
Substantial disagreement exists concerning how designers in many fields, whether amateur or professional, alone or in teams, produce designs. Design researchers Dorst and Dijkhuis acknowledged that "there are many ways of describing design processes," and compared and contrasted two dominant but different views of the design process: as a rational problem-solving process and as a process of reflection-in-action. They suggested that these two paradigms "represent two fundamentally different ways of looking at the world: positivism and constructionism." The paradigms may reflect differing views of how designing should be done and how it actually is done, and both have a variety of names. The problem-solving view has been called "the rational model," "technical rationality" and "the reason-centric perspective." The alternative view has been called "reflection-in-action," "coevolution" and "the action-centric perspective."
Rational model
The rational model was independently developed by Herbert A. Simon, an American scientist, and two German engineering design theorists, Gerhard Pahl and Wolfgang Beitz. It posits that:
Designers attempt to optimize a design candidate for known constraints and objectives.
The design process is plan-driven.
The design process is understood in terms of a discrete sequence of stages.
The rational model is based on a rationalist philosophy and underlies the waterfall model, systems development life cycle, and much of the engineering design literature. According to the rationalist philosophy, design is informed by research and knowledge in a predictable and controlled manner.
Typical stages consistent with the rational model include the following:
Pre-production design
Design brief – initial statement of intended outcome.
Analysis – analysis of design goals.
Research – investigating similar designs in the field or related topics.
Specification – specifying requirements of a design for a product (product design specification) or service.
Problem solving – conceptualizing and documenting designs.
Presentation – presenting designs.
Design during production.
Development – continuation and improvement of a design.
Product testing – in situ testing of a design.
Post-production design feedback for future designs.
Implementation – introducing the design into the environment.
Evaluation and conclusion – summary of process and results, including constructive criticism and suggestions for future improvements.
Redesign – any or all stages in the design process repeated (with corrections made) at any time before, during, or after production.
Each stage has many associated best practices.
Criticism of the rational model
The rational model has been widely criticized on two primary grounds:
Designers do not work this way – extensive empirical evidence has demonstrated that designers do not act as the rational model suggests.
Unrealistic assumptions – goals are often unknown when a design project begins, and the requirements and constraints continue to change.
Action-centric model
The action-centric perspective is a label given to a collection of interrelated concepts, which are antithetical to the rational model. It posits that:
Designers use creativity and emotion to generate design candidates.
The design process is improvised.
No universal sequence of stages is apparent – analysis, design, and implementation are contemporary and inextricably linked.
The action-centric perspective is based on an empiricist philosophy and broadly consistent with the agile approach and methodical development. Substantial empirical evidence supports the veracity of this perspective in describing the actions of real designers. Like the rational model, the action-centric model sees design as informed by research and knowledge.
At least two views of design activity are consistent with the action-centric perspective. Both involve these three basic activities:
In the reflection-in-action paradigm, designers alternate between "framing", "making moves", and "evaluating moves". "Framing" refers to conceptualizing the problem, i.e., defining goals and objectives. A "move" is a tentative design decision. The evaluation process may lead to further moves in the design.
In the sensemaking–coevolution–implementation framework, designers alternate between its three titular activities. Sensemaking includes both framing and evaluating moves. Implementation is the process of constructing the design object. Coevolution is "the process where the design agent simultaneously refines its mental picture of the design object based on its mental picture of the context, and vice versa".
The concept of the design cycle is understood as a circular time structure, which may start with the thinking of an idea, then expressing it by the use of visual or verbal means of communication (design tools), the sharing and perceiving of the expressed idea, and finally starting a new cycle with the critical rethinking of the perceived idea. Anderson points out that this concept emphasizes the importance of the means of expression, which at the same time are means of perception of any design ideas.
Philosophies
Philosophy of design is the study of definitions, assumptions, foundations, and implications of design. There are also many informal 'philosophies' for guiding design such as personal values or preferred approaches.
Approaches to design
Some of these values and approaches include:
Critical design uses designed artefacts as an embodied critique or commentary on existing values, morals, and practices in a culture. Critical design can make aspects of the future physically present to provoke a reaction.
Ecological design is a design approach that prioritizes the consideration of the environmental impacts of a product or service, over its whole lifecycle. Ecodesign research focuses primarily on barriers to implementation, ecodesign tools and methods, and the intersection of ecodesign with other research disciplines.
Participatory design (originally co-operative design, now often co-design) is the practice of collective creativity to design, attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end-users) in the design process to help ensure the result meets their needs and is usable. Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own.
Scientific design refers to industrialised design based on scientific knowledge. Science can be used to study the effects and need for a potential or existing product in general and to design products that are based on scientific knowledge. For instance, a scientific design of face masks for COVID-19 mitigation may be based on investigations of filtration performance, mitigation performance, thermal comfort, biodegradability and flow resistance.
Service design is a term that is used for designing or organizing the experience around a product and the service associated with a product's use. The purpose of service design methodologies is to establish the most effective practices for designing services, according to both the needs of users and the competencies and capabilities of service providers.
Sociotechnical system design, a philosophy and tools for participative designing of work arrangements and supporting processes – for organizational purpose, quality, safety, economics, and customer requirements in core work processes, the quality of peoples experience at work, and the needs of society.
Transgenerational design, the practice of making products and environments compatible with those physical and sensory impairments associated with human aging and which limit major activities of daily living.
User-centered design, which focuses on the needs, wants, and limitations of the end-user of the designed artefact. One aspect of user-centered design is ergonomics.
Relationship with the arts
The boundaries between art and design are blurry, largely due to a range of applications both for the term 'art' and the term 'design'. Applied arts can include industrial design, graphic design, fashion design, and the decorative arts which traditionally includes craft objects. In graphic arts (2D image making that ranges from photography to illustration), the distinction is often made between fine art and commercial art, based on the context within which the work is produced and how it is traded.
Types of designing
See also
References
Further reading
Margolin, Victor. World History of Design. New York: Bloomsbury Academic, 2015. (2 vols) .
Raizman, David Seth (12 November 2003). The History of Modern Design. Pearson. .
Design studies
Aesthetics
Structure
Human activities
Engineering disciplines | Design | [
"Engineering",
"Biology"
] | 2,446 | [
"Human activities",
"Behavior",
"Design studies",
"nan",
"Design",
"Human behavior"
] |
8,562 | https://en.wikipedia.org/wiki/Differential%20topology | In mathematics, differential topology is the field dealing with the topological properties and smooth properties of smooth manifolds. In this sense differential topology is distinct from the closely related field of differential geometry, which concerns the geometric properties of smooth manifolds, including notions of size, distance, and rigid shape. By comparison differential topology is concerned with coarser properties, such as the number of holes in a manifold, its homotopy type, or the structure of its diffeomorphism group. Because many of these coarser properties may be captured algebraically, differential topology has strong links to algebraic topology.
The central goal of the field of differential topology is the classification of all smooth manifolds up to diffeomorphism. Since dimension is an invariant of smooth manifolds up to diffeomorphism type, this classification is often studied by classifying the (connected) manifolds in each dimension separately:
In dimension 1, the only smooth manifolds up to diffeomorphism are the circle, the real number line, and, allowing a boundary, the half-closed interval [0, 1) and the fully closed interval [0, 1].
In dimension 2, every closed surface is classified up to diffeomorphism by its genus, the number of holes (or equivalently its Euler characteristic), and whether or not it is orientable. This is the famous classification of closed surfaces. Already in dimension two the classification of non-compact surfaces becomes difficult, due to the existence of exotic spaces such as Jacob's ladder.
In dimension 3, William Thurston's geometrization conjecture, proven by Grigori Perelman, gives a partial classification of compact three-manifolds. Included in this theorem is the Poincaré conjecture, which states that any closed, simply connected three-manifold is homeomorphic (and in fact diffeomorphic) to the 3-sphere.
Beginning in dimension 4, the classification becomes much more difficult for two reasons. Firstly, every finitely presented group appears as the fundamental group of some 4-manifold, and since the fundamental group is a diffeomorphism invariant, this makes the classification of 4-manifolds at least as difficult as the classification of finitely presented groups. By the word problem for groups, which is equivalent to the halting problem, it is impossible to classify such groups, so a full topological classification is impossible. Secondly, beginning in dimension four it is possible to have smooth manifolds that are homeomorphic, but with distinct, non-diffeomorphic smooth structures. This is true even for the Euclidean space R^4, which admits many exotic structures. This means that the study of differential topology in dimensions 4 and higher must use tools genuinely outside the realm of the regular continuous topology of topological manifolds. One of the central open problems in differential topology is the four-dimensional smooth Poincaré conjecture, which asks if every smooth 4-manifold that is homeomorphic to the 4-sphere is also diffeomorphic to it. That is, does the 4-sphere admit only one smooth structure? This conjecture is true in dimensions 1, 2, and 3, by the above classification results, but is known to be false in dimension 7 due to the Milnor spheres.
Important tools in studying the differential topology of smooth manifolds include the construction of smooth topological invariants of such manifolds, such as de Rham cohomology or the intersection form, as well as smoothable topological constructions, such as smooth surgery theory or the construction of cobordisms. Morse theory is an important tool which studies smooth manifolds by considering the critical points of differentiable functions on the manifold, demonstrating how the smooth structure of the manifold enters into the set of tools available. Oftentimes more geometric or analytical techniques may be used, by equipping a smooth manifold with a Riemannian metric or by studying a differential equation on it. Care must be taken to ensure that the resulting information is insensitive to this choice of extra structure, and so genuinely reflects only the topological properties of the underlying smooth manifold. For example, the Hodge theorem provides a geometric and analytical interpretation of the de Rham cohomology, and gauge theory was used by Simon Donaldson to prove facts about the intersection form of simply connected 4-manifolds. In some cases techniques from contemporary physics may appear, such as topological quantum field theory, which can be used to compute topological invariants of smooth spaces.
Famous theorems in differential topology include the Whitney embedding theorem, the hairy ball theorem, the Hopf theorem, the Poincaré–Hopf theorem, Donaldson's theorem, and the Poincaré conjecture.
Description
Differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are 'softer' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume.
On the other hand, smooth manifolds are more rigid than the topological manifolds. John Milnor discovered that some spheres have more than one smooth structure—see Exotic sphere and Donaldson's theorem. Michel Kervaire exhibited topological manifolds with no smooth structure at all. Some constructions of smooth manifold theory, such as the existence of tangent bundles, can be done in the topological setting with much more work, and others cannot.
One of the main topics in differential topology is the study of special kinds of smooth mappings between manifolds, namely immersions and submersions, and the intersections of submanifolds via transversality. More generally one is interested in properties and invariants of smooth manifolds that are carried over by diffeomorphisms, another special kind of smooth mapping. Morse theory is another branch of differential topology, in which topological information about a manifold is deduced from changes in the rank of the Jacobian of a function.
For a list of differential topology topics, see the following reference: List of differential geometry topics.
Differential topology versus differential geometry
Differential topology and differential geometry are first characterized by their similarity. They both study primarily the properties of differentiable manifolds, sometimes with a variety of structures imposed on them.
One major difference lies in the nature of the problems that each subject tries to address. In one view, differential topology distinguishes itself from differential geometry by studying primarily those problems that are inherently global. Consider the example of a coffee cup and a donut. From the point of view of differential topology, the donut and the coffee cup are the same (in a sense). This is an inherently global view, though, because there is no way for the differential topologist to tell whether the two objects are the same (in this sense) by looking at just a tiny (local) piece of either of them. They must have access to each entire (global) object.
From the point of view of differential geometry, the coffee cup and the donut are different because it is impossible to rotate the coffee cup in such a way that its configuration matches that of the donut. This is also a global way of thinking about the problem. But an important distinction is that the geometer does not need the entire object to decide this. By looking, for instance, at just a tiny piece of the handle, they can decide that the coffee cup is different from the donut because the handle is thinner (or more curved) than any piece of the donut.
To put it succinctly, differential topology studies structures on manifolds that, in a sense, have no interesting local structure. Differential geometry studies structures on manifolds that do have an interesting local (or sometimes even infinitesimal) structure.
More mathematically, for example, the problem of constructing a diffeomorphism between two manifolds of the same dimension is inherently global since locally two such manifolds are always diffeomorphic. Likewise, the problem of computing a quantity on a manifold that is invariant under differentiable mappings is inherently global, since any local invariant will be trivial in the sense that it is already exhibited in the topology of . Moreover, differential topology does not restrict itself necessarily to the study of diffeomorphism. For example, symplectic topology—a subbranch of differential topology—studies global properties of symplectic manifolds. Differential geometry concerns itself with problems—which may be local or global—that always have some non-trivial local properties. Thus differential geometry may study differentiable manifolds equipped with a connection, a metric (which may be Riemannian, pseudo-Riemannian, or Finsler), a special sort of distribution (such as a CR structure), and so on.
This distinction between differential geometry and differential topology is blurred, however, in questions specifically pertaining to local diffeomorphism invariants such as the tangent space at a point. Differential topology also deals with questions like these, which specifically pertain to the properties of differentiable mappings on (for example the tangent bundle, jet bundles, the Whitney extension theorem, and so forth).
The distinction is concise in abstract terms:
Differential topology is the study of the (infinitesimal, local, and global) properties of structures on manifolds that have only trivial local moduli.
Differential geometry is such a study of structures on manifolds that have one or more non-trivial local moduli.
See also
List of differential geometry topics
Glossary of differential geometry and topology
Important publications in differential geometry
Important publications in differential topology
Basic introduction to the mathematics of curved spacetime
Notes
References
External links | Differential topology | [
"Mathematics"
] | 1,983 | [
"Topology",
"Differential topology"
] |
8,564 | https://en.wikipedia.org/wiki/Diffeomorphism | In mathematics, a diffeomorphism is an isomorphism of differentiable manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are continuously differentiable.
Definition
Given two differentiable manifolds M and N, a differentiable map f : M → N is a diffeomorphism if it is a bijection and its inverse is differentiable as well. If these functions are r times continuously differentiable, f is called a C^r-diffeomorphism.
Two manifolds M and N are diffeomorphic (usually denoted M ≃ N) if there is a diffeomorphism f from M to N. Two C^r-differentiable manifolds are C^r-diffeomorphic if there is an r times continuously differentiable bijective map between them whose inverse is also r times continuously differentiable.
Diffeomorphisms of subsets of manifolds
Given a subset of a manifold and a subset of a manifold , a function is said to be smooth if for all in there is a neighborhood of and a smooth function such that the restrictions agree: (note that is an extension of ). The function is said to be a diffeomorphism if it is bijective, smooth and its inverse is smooth.
Local description
Testing whether a differentiable map is a diffeomorphism can be made locally under some mild restrictions. This is the Hadamard-Caccioppoli theorem:
If , are connected open subsets of such that is simply connected, a differentiable map is a diffeomorphism if it is proper and if the differential is bijective (and hence a linear isomorphism) at each point in .
Some remarks:
It is essential for to be simply connected for the function to be globally invertible (under the sole condition that its derivative be a bijective map at each point). For example, consider the "realification" of the complex square function
Then is surjective and it satisfies
Thus, though is bijective at each point, is not invertible because it fails to be injective (e.g. ).
Since the differential at a point (for a differentiable function)
is a linear map, it has a well-defined inverse if and only if is a bijection. The matrix representation of is the matrix of first-order partial derivatives whose entry in the -th row and -th column is . This so-called Jacobian matrix is often used for explicit computations.
Diffeomorphisms are necessarily between manifolds of the same dimension. Imagine a map going from dimension n to dimension k. If n < k then its differential could never be surjective, and if n > k then its differential could never be injective. In both cases, therefore, the differential fails to be a bijection.
If the differential is a bijection at a point x, then the map is said to be a local diffeomorphism (since, by continuity, the differential will also be bijective for all points sufficiently close to x).
Given a smooth map from dimension to dimension , if (or, locally, ) is surjective, is said to be a submersion (or, locally, a "local submersion"); and if (or, locally, ) is injective, is said to be an immersion (or, locally, a "local immersion").
A differentiable bijection is not necessarily a diffeomorphism. f(x) = x^3, for example, is not a diffeomorphism from R to itself because its derivative vanishes at 0 (and hence its inverse is not differentiable at 0). This is an example of a homeomorphism that is not a diffeomorphism.
When is a map between differentiable manifolds, a diffeomorphic is a stronger condition than a homeomorphic . For a diffeomorphism, and its inverse need to be differentiable; for a homeomorphism, and its inverse need only be continuous. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism.
is a diffeomorphism if, in coordinate charts, it satisfies the definition above. More precisely: Pick any cover of by compatible coordinate charts and do the same for . Let and be charts on, respectively, and , with and as, respectively, the images of and . The map is then a diffeomorphism as in the definition above, whenever .
Examples
Since any manifold can be locally parametrised, we can consider some explicit maps from into .
Let
We can calculate the Jacobian matrix:
The Jacobian matrix has zero determinant if and only if . We see that could only be a diffeomorphism away from the -axis and the -axis. However, is not bijective since , and thus it cannot be a diffeomorphism.
Let
where the and are arbitrary real numbers, and the omitted terms are of degree at least two in x and y. We can calculate the Jacobian matrix at 0:
We see that g is a local diffeomorphism at 0 if, and only if,
i.e. the linear terms in the components of g are linearly independent as polynomials.
Let
We can calculate the Jacobian matrix:
The Jacobian matrix has zero determinant everywhere! In fact we see that the image of h is the unit circle.
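The local test used in these examples, that the Jacobian determinant be non-zero, can also be checked numerically when explicit formulas are unwieldy. The sketch below uses central finite differences; the map F is an arbitrary stand-in chosen for illustration, not one of the examples above.

```python
# Numerical sketch: test whether a smooth map R^2 -> R^2 is a local diffeomorphism
# at a point by checking that its Jacobian determinant is non-zero there.
import numpy as np

def F(p):
    x, y = p
    return np.array([x + y ** 3, x ** 2 - y])   # arbitrary illustrative map

def jacobian(func, p, h=1e-6):
    p = np.asarray(p, dtype=float)
    cols = []
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        cols.append((func(p + dp) - func(p - dp)) / (2 * h))  # central difference
    return np.column_stack(cols)    # entry (i, j) approximates dF_i / dx_j

point = np.array([1.0, 2.0])
# About -25 here: non-zero, so F is a local diffeomorphism near (1, 2).
print(np.linalg.det(jacobian(F, point)))
```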
Surface deformations
In mechanics, a stress-induced transformation is called a deformation and may be described by a diffeomorphism.
A diffeomorphism between two surfaces and has a Jacobian matrix that is an invertible matrix. In fact, it is required that for in , there is a neighborhood of in which the Jacobian stays non-singular. Suppose that in a chart of the surface,
The total differential of u is
, and similarly for v.
Then the image is a linear transformation, fixing the origin, and expressible as the action of a complex number of a particular type. When (dx, dy) is also interpreted as that type of complex number, the action is of complex multiplication in the appropriate complex number plane. As such, there is a type of angle (Euclidean, hyperbolic, or slope) that is preserved in such a multiplication. Due to Df being invertible, the type of complex number is uniform over the surface. Consequently, a surface deformation or diffeomorphism of surfaces has the conformal property of preserving (the appropriate type of) angles.
Diffeomorphism group
Let be a differentiable manifold that is second-countable and Hausdorff. The diffeomorphism group of is the group of all diffeomorphisms of to itself, denoted by or, when is understood, . This is a "large" group, in the sense that—provided is not zero-dimensional—it is not locally compact.
Topology
The diffeomorphism group has two natural topologies: weak and strong . When the manifold is compact, these two topologies agree. The weak topology is always metrizable. When the manifold is not compact, the strong topology captures the behavior of functions "at infinity" and is not metrizable. It is, however, still Baire.
Fixing a Riemannian metric on , the weak topology is the topology induced by the family of metrics
as varies over compact subsets of . Indeed, since is -compact, there is a sequence of compact subsets whose union is . Then:
The diffeomorphism group equipped with its weak topology is locally homeomorphic to the space of vector fields . Over a compact subset of , this follows by fixing a Riemannian metric on and using the exponential map for that metric. If is finite and the manifold is compact, the space of vector fields is a Banach space. Moreover, the transition maps from one chart of this atlas to another are smooth, making the diffeomorphism group into a Banach manifold with smooth right translations; left translations and inversion are only continuous. If , the space of vector fields is a Fréchet space. Moreover, the transition maps are smooth, making the diffeomorphism group into a Fréchet manifold and even into a regular Fréchet Lie group. If the manifold is -compact and not compact the full diffeomorphism group is not locally contractible for any of the two topologies. One has to restrict the group by controlling the deviation from the identity near infinity to obtain a diffeomorphism group which is a manifold; see .
Lie algebra
The Lie algebra of the diffeomorphism group of consists of all vector fields on equipped with the Lie bracket of vector fields. Somewhat formally, this is seen by making a small change to the coordinate at each point in space:
so the infinitesimal generators are the vector fields
Examples
When is a Lie group, there is a natural inclusion of in its own diffeomorphism group via left-translation. Let denote the diffeomorphism group of , then there is a splitting , where is the subgroup of that fixes the identity element of the group.
The diffeomorphism group of Euclidean space consists of two components, consisting of the orientation-preserving and orientation-reversing diffeomorphisms. In fact, the general linear group is a deformation retract of the subgroup of diffeomorphisms fixing the origin under the map . In particular, the general linear group is also a deformation retract of the full diffeomorphism group.
For a finite set of points, the diffeomorphism group is simply the symmetric group. Similarly, if is any manifold there is a group extension . Here is the subgroup of that preserves all the components of , and is the permutation group of the set (the components of ). Moreover, the image of the map is the bijections of that preserve diffeomorphism classes.
Transitivity
For a connected manifold , the diffeomorphism group acts transitively on . More generally, the diffeomorphism group acts transitively on the configuration space . If is at least two-dimensional, the diffeomorphism group acts transitively on the configuration space and the action on is multiply transitive .
Extensions of diffeomorphisms
In 1926, Tibor Radó asked whether the harmonic extension of any homeomorphism or diffeomorphism of the unit circle to the unit disc yields a diffeomorphism on the open disc. An elegant proof was provided shortly afterwards by Hellmuth Kneser. In 1945, Gustave Choquet, apparently unaware of this result, produced a completely different proof.
The (orientation-preserving) diffeomorphism group of the circle is pathwise connected. This can be seen by noting that any such diffeomorphism can be lifted to a diffeomorphism of the reals satisfying ; this space is convex and hence path-connected. A smooth, eventually constant path to the identity gives a second more elementary way of extending a diffeomorphism from the circle to the open unit disc (a special case of the Alexander trick). Moreover, the diffeomorphism group of the circle has the homotopy-type of the orthogonal group .
The corresponding extension problem for diffeomorphisms of higher-dimensional spheres was much studied in the 1950s and 1960s, with notable contributions from René Thom, John Milnor and Stephen Smale. An obstruction to such extensions is given by the finite abelian group , the "group of twisted spheres", defined as the quotient of the abelian component group of the diffeomorphism group by the subgroup of classes extending to diffeomorphisms of the ball .
Connectedness
For manifolds, the diffeomorphism group is usually not connected. Its component group is called the mapping class group. In dimension 2 (i.e. surfaces), the mapping class group is a finitely presented group generated by Dehn twists; this has been proved by Max Dehn, W. B. R. Lickorish, and Allen Hatcher). Max Dehn and Jakob Nielsen showed that it can be identified with the outer automorphism group of the fundamental group of the surface.
William Thurston refined this analysis by classifying elements of the mapping class group into three types: those equivalent to a periodic diffeomorphism; those equivalent to a diffeomorphism leaving a simple closed curve invariant; and those equivalent to pseudo-Anosov diffeomorphisms. In the case of the torus , the mapping class group is simply the modular group and the classification becomes classical in terms of elliptic, parabolic and hyperbolic matrices. Thurston accomplished his classification by observing that the mapping class group acted naturally on a compactification of Teichmüller space; as this enlarged space was homeomorphic to a closed ball, the Brouwer fixed-point theorem became applicable. Smale conjectured that if is an oriented smooth closed manifold, the identity component of the group of orientation-preserving diffeomorphisms is simple. This had first been proved for a product of circles by Michel Herman; it was proved in full generality by Thurston.
Homotopy types
The diffeomorphism group of has the homotopy-type of the subgroup . This was proven by Steve Smale.
The diffeomorphism group of the torus has the homotopy-type of its linear automorphisms: .
The diffeomorphism groups of orientable surfaces of genus have the homotopy-type of their mapping class groups (i.e. the components are contractible).
The homotopy-type of the diffeomorphism groups of 3-manifolds are fairly well understood via the work of Ivanov, Hatcher, Gabai and Rubinstein, although there are a few outstanding open cases (primarily 3-manifolds with finite fundamental groups).
The homotopy-type of diffeomorphism groups of -manifolds for are poorly understood. For example, it is an open problem whether or not has more than two components. Via Milnor, Kahn and Antonelli, however, it is known that provided , does not have the homotopy-type of a finite CW-complex.
Homeomorphism and diffeomorphism
Since every diffeomorphism is a homeomorphism, given a pair of manifolds which are diffeomorphic to each other they are in particular homeomorphic to each other. The converse is not true in general.
While it is easy to find homeomorphisms that are not diffeomorphisms, it is more difficult to find a pair of homeomorphic manifolds that are not diffeomorphic. In dimensions 1, 2 and 3, any pair of homeomorphic smooth manifolds are diffeomorphic. In dimension 4 or greater, examples of homeomorphic but not diffeomorphic pairs exist. The first such example was constructed by John Milnor in dimension 7. He constructed a smooth 7-dimensional manifold (called now Milnor's sphere) that is homeomorphic to the standard 7-sphere but not diffeomorphic to it. There are, in fact, 28 oriented diffeomorphism classes of manifolds homeomorphic to the 7-sphere (each of them is the total space of a fiber bundle over the 4-sphere with the 3-sphere as the fiber).
More unusual phenomena occur for 4-manifolds. In the early 1980s, a combination of results due to Simon Donaldson and Michael Freedman led to the discovery of exotic : there are uncountably many pairwise non-diffeomorphic open subsets of each of which is homeomorphic to , and also there are uncountably many pairwise non-diffeomorphic differentiable manifolds homeomorphic to that do not embed smoothly in .
See also
Anosov diffeomorphism such as Arnold's cat map
Diffeo anomaly also known as a gravitational anomaly, a type anomaly in quantum mechanics
Diffeology, smooth parameterizations on a set, which makes a diffeological space
Diffeomorphometry, metric study of shape and form in computational anatomy
Étale morphism
Large diffeomorphism
Local diffeomorphism
Superdiffeomorphism
Notes
References
Mathematical physics | Diffeomorphism | [
"Physics",
"Mathematics"
] | 3,374 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
8,578 | https://en.wikipedia.org/wiki/Dyne | The dyne (symbol: dyn; ) is a derived unit of force specified in the centimetre–gram–second (CGS) system of units, a predecessor of the modern SI.
History
The name dyne was first proposed as a CGS unit of force in 1873 by a Committee of the British Association for the Advancement of Science.
Definition
The dyne is defined as "the force required to accelerate a mass of one gram at a rate of one centimetre per second squared". An equivalent definition of the dyne is "that force which, acting for one second, will produce a change of velocity of one centimetre per second in a mass of one gram".
One dyne is equal to 10 micronewtons, 10^−5 N, or to 10 nsn (nanosthenes) in the old metre–tonne–second system of units.
1 dyn = 1 g⋅cm/s^2 = 10^−5 kg⋅m/s^2 = 10^−5 N
1 N = 1 kg⋅m/s^2 = 10^5 g⋅cm/s^2 = 10^5 dyn
Use
The dyne per centimetre is a unit traditionally used to measure surface tension. For example, the surface tension of distilled water is 71.99 dyn/cm at 25 °C (77 °F). (In SI units this is 71.99 mN/m, or 0.07199 N/m.)
See also
Centimetre–gram–second system of units
Erg
References
Centimetre–gram–second system of units | Dyne | [
"Physics",
"Mathematics"
] | 310 | [
"Force",
"Physical quantities",
"Quantity",
"Units of force",
"Units of measurement"
] |
8,586 | https://en.wikipedia.org/wiki/Dyson%20sphere | A Dyson sphere is a hypothetical megastructure that encompasses a star and captures a large percentage of its power output. The concept is a thought experiment that attempts to imagine how a spacefaring civilization would meet its energy requirements once those requirements exceed what can be generated from the home planet's resources alone. Because only a tiny fraction of a star's energy emissions reaches the surface of any orbiting planet, building structures encircling a star would enable a civilization to harvest far more energy.
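The size of that fraction can be estimated with elementary geometry: a planet intercepts the part of the star's output that falls on its cross-sectional disc out of the full sphere at its orbital radius. A rough sketch using approximate round figures for Earth (radius about 6,371 km, orbital distance about 1.496 × 10^8 km):

```python
# Rough estimate of the fraction of the Sun's output intercepted by Earth.
import math

r_earth = 6.371e6        # Earth's mean radius in metres (approximate)
d_orbit = 1.496e11       # Earth-Sun distance in metres (approximate)

# Cross-sectional disc of the planet divided by the full sphere at its orbit.
fraction = (math.pi * r_earth ** 2) / (4 * math.pi * d_orbit ** 2)
print(f"{fraction:.1e}")  # roughly 4.5e-10, i.e. about one part in two billion
```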
The first modern imagining of such a structure was by Olaf Stapledon in his science fiction novel Star Maker (1937). The concept was later explored by the physicist Freeman Dyson in his 1960 paper "Search for Artificial Stellar Sources of Infrared Radiation". Dyson speculated that such structures would be the logical consequence of the escalating energy needs of a technological civilization and would be a necessity for its long-term survival. A signature of such spheres detected in astronomical searches would be an indicator of extraterrestrial intelligence.
Since Dyson's paper, many variant designs involving an artificial structure or series of structures to encompass a star have been proposed in exploratory engineering or described in science fiction, often under the name "Dyson sphere". Fictional depictions often describe a solid shell of matter enclosing a star, an arrangement considered by Dyson himself to be impossible.
Origins
Inspired by the 1937 science fiction novel Star Maker by Olaf Stapledon, the physicist and mathematician Freeman Dyson was the first to formalize the concept of what became known as the "Dyson sphere" in his 1960 Science paper "Search for Artificial Stellar Sources of Infra-Red Radiation". Dyson theorized that as the energy requirements of an advanced technological civilization increased, there would come a time when it would need to systematically harvest the energy from its local star on a large scale. He speculated that this could be done via a system of structures orbiting the star, designed to intercept and collect its energy. He argued that as the structure would result in the large-scale conversion of starlight into far-infrared radiation, an earth-based search for sources of infrared radiation could identify stars supporting intelligent life.
Dyson did not detail how such a system could be constructed, simply referring to it in the paper as a "shell" or "biosphere". He later clarified that he did not have in mind a solid structure, saying: "A solid shell or ring surrounding a star is mechanically impossible. The form of 'biosphere' which I envisaged consists of a loose collection or swarm of objects traveling on independent orbits around the star." Such a concept has often been referred to as a Dyson swarm; however, in 2013, Dyson said he had come to regret that the concept had been named after him.
Search for megastructures
Dyson-style energy collectors around a distant star would absorb and re-radiate energy from the star. The wavelengths of such re-radiated energy may be atypical for the star's spectral type, due to the presence of heavy elements not naturally occurring within the star. If the percentage of such atypical wavelengths were to be significant, an alien megastructure could be detected at interstellar distances. This could indicate the presence of what has been called a Type II Kardashev civilization.
SETI has looked for such infrared-heavy spectra from solar analogs, as has Fermilab. Fermilab discovered 17 potential "ambiguous" candidates, of which four were in 2006 called "amusing but still questionable". Later searches also resulted in several candidates, all of which remain unconfirmed.
On 14 October 2015, Planet Hunters' citizen scientists discovered unusual light fluctuations of the star KIC 8462852 raising press speculation that a Dyson sphere may have been discovered. However, subsequent analysis showed that the results were consistent with the presence of dust. A further campaign in 2024 identified seven possible candidates for Dyson-spheres, but further investigation was said to be required.
Feasibility and science-based speculation
Although Dyson sphere systems are theoretically possible, building a stable megastructure around the Sun is currently far beyond humanity's engineering capacity. The number of craft required to obtain, transmit, and maintain a complete Dyson sphere exceeds present-day industrial capabilities. George Dvorsky has advocated the use of self-replicating robots to overcome this limitation in the relatively near term. Some have suggested that Dyson sphere habitats could be built around white dwarfs and even pulsars.
Stellar engines are hypothetical megastructures whose purpose is to extract useful energy from a star, sometimes for specific purposes. For example, Matrioshka brains have been proposed to extract energy for computation, while Shkadov thrusters would extract energy for propulsion. Some proposed stellar engine designs are based on the Dyson sphere.
From May until June 2024, speculation grew that potential signs of interstellar Dyson spheres had been discovered. The seven objects of interest, all located within a thousand light-years of Earth, are M-dwarfs, a class of stars that are smaller and less luminous than the Sun. However, the authors of the findings were careful not to make any overblown claims. Despite this, many media outlets picked up on the story. Less fantastical alternative explanations have been made, including a proposal that the infrared from the discoveries was caused by distant dust-obscured galaxies.
Fictional examples
A precursor to the concept of Dyson spheres was featured in the 1937 novel Star Maker by Olaf Stapledon, in which he described "every solar system... surrounded by a gauze of light-traps, which focused the escaping solar energy for intelligent use"; Dyson got his inspiration from this book and suggested that "Stapledon sphere" would be a more apt name for the concept. Fictional Dyson spheres are typically solid structures forming a continuous shell around the star in question, although Dyson himself considered that prospect to be mechanically implausible. They are sometimes used as the type of plot device known as a Big Dumb Object.
Dyson spheres appear as a background element in many works of fiction, including the 1964 novel The Wanderer by Fritz Leiber where aliens enclose multiple stars in this way. Dyson spheres are depicted in the 1975–1983 book series Saga of Cuckoo by Frederik Pohl and Jack Williamson, and one functions as the setting of Bob Shaw's 1975 novel Orbitsville and its sequels. In the 1992 episode "Relics" of the TV show Star Trek: The Next Generation, the Enterprise finds itself trapped in an abandoned Dyson sphere; in a 2011 interview, Dyson said that he enjoyed the episode, although he considered the sphere depicted to be "nonsense".
Michael Jan Friedman who wrote the novelization observed that in the TV episode itself the Dyson sphere was effectively a MacGuffin, with "just nothing about it" in the story, and decided to flesh out the plot element in his novelization.
Other science-fiction story examples include Tony Rothman's The World Is Round, Somtow Sucharitkul's Inquisitor series, Timothy Zahn's Spinneret, James White's Federation World, Stephen Baxter's The Time Ships, and Peter F. Hamilton's Pandora's Star. Variations on the Dyson sphere concept include a single circular band in Larry Niven's 1970 novel Ringworld, a half sphere in the 2012 novel Bowl of Heaven by Gregory Benford and Niven, and nested spheres, also known as a Matrioshka brain, in Colin Kapp's 1980s Cageworld series and Brian Stableford's 1979–1990 Asgard trilogy.
Stableford himself observed that Dyson spheres are usually MacGuffins or remain deep in the background of stories, giving as examples Fritz Leiber's The Wanderer and Linda Nagata's Deception Well, whereas stories involving space exploration tend to employ variants like Niven's Ringworld. He gives two reasons for this: firstly, that Dyson spheres are simply too big to address, which Friedman also alluded to when pointing out that the reason his novelization of "Relics" did not go further into the sphere was that it was only four hundred pages and he had just shy of four weeks to write it; and secondly that, especially for hard science fiction, Dyson spheres have certain engineering problems that complicate stories. In particular, since gravitational attraction is in equilibrium inside such a sphere (per the shell theorem), other means such as rotating the sphere have to be employed in order to keep things attached to the interior surface, which then leads to the problem of a gravity gradient that goes to zero at the rotational poles. Authors address this with various modifications of the idea, such as the aforementioned Cageworld nesting, Dan Alderson's double-sphere idea, and Niven's reduced Ringworld (discussed in "Bigger Than Worlds").
See also
References
Further reading
External links
Dyson sphere FAQ
FermiLab: IRAS-based whole sky upper limit on Dyson spheres with an appendix on Dyson sphere engineering
Astronomy projects
Energy development
Exploratory engineering
Freeman Dyson
History of science
Hypothetical astronomical objects
Hypothetical technology
Megastructures
Philosophy of science
Philosophy of technology
Proposed space stations
Science fiction themes
Search for extraterrestrial intelligence
Solar power
Space colonization
Thought experiments | Dyson sphere | [
"Astronomy",
"Technology"
] | 1,906 | [
"Exploratory engineering",
"Astronomical hypotheses",
"History of science",
"Philosophy of technology",
"Astronomical myths",
"Science and technology studies",
"Hypothetical astronomical objects",
"Megastructures",
"Astronomy projects",
"Astronomical objects",
"History of science and technology"... |
8,587 | https://en.wikipedia.org/wiki/Democide | Democide refers to "the intentional killing of an unarmed or disarmed person by government agents acting in their authoritative capacity and pursuant to government policy or high command." The term was first coined by Holocaust historian and statistics expert, R.J. Rummel in his book Death by Government, but has also been described as a better term than genocide to refer to certain types of mass killings, by renowned Holocaust historian Yehuda Bauer. According to Rummel, this definition covers a wide range of deaths, including forced labor and concentration camp victims, extrajudicial summary killings, and mass deaths due to governmental acts of criminal omission and neglect, such as in deliberate famines like the Holodomor, as well as killings by de facto governments, for example, killings during a civil war. This definition covers any murder of any number of persons by any government.
Rummel created democide as an extended term to include forms of government murder not covered by genocide. According to Rummel, democide surpassed war as the leading cause of non-natural death in the 20th century.
Definition
Democide is the murder of any person or people by their government, including genocide, politicide, and mass murder. Democide is not necessarily the elimination of entire cultural groups but rather groups within the country that the government feels need to be eradicated for political reasons and due to claimed future threats.
According to Rummel, genocide has three different meanings. The ordinary meaning is murder by government of people due to their national, ethnic, racial or religious group membership. The legal meaning of genocide refers to the international treaty on genocide, the Convention on the Prevention and Punishment of the Crime of Genocide. This also includes nonlethal acts that in the end eliminate or greatly hinder the group. Looking back on history, one can see the different variations of democides that have occurred, but it still consists of acts of killing or mass murder. The generalized meaning of genocide is similar to the ordinary meaning but also includes government killings of political opponents or otherwise intentional murder. In order to avoid confusion over which meaning is intended, Rummel created democide for this third meaning.
In "How Many Did Communist Regimes Murder?", Rummel wrote:
First, however, I should clarify the term democide. It means for governments what murder means for an individual under municipal law. It is the premeditated killing of a person in cold blood, or causing the death of a person through reckless and wanton disregard for their life. Thus, a government incarcerating people in a prison under such deadly conditions that they die in a few years is murder by the state—democide—as would parents letting a child die from malnutrition and exposure be murder. So would government forced labor that kills a person within months or a couple of years be murder. So would government created famines that then are ignored or knowingly aggravated by government action be murder of those who starve to death. And obviously, extrajudicial executions, death by torture, government massacres, and all genocidal killing be murder. However, judicial executions for crimes that internationally would be considered capital offenses, such as for murder or treason (as long as it is clear that these are not fabricated for the purpose of executing the accused, as in communist show trials), are not democide. Nor is democide the killing of enemy soldiers in combat or of armed rebels, nor of noncombatants as a result of military action against military targets.
In his work and research, Rummel distinguished between colonial, democratic, and authoritarian and totalitarian regimes. He defined totalitarianism as follows:
There is much confusion about what is meant by totalitarian in the literature, including the denial that such systems even exist. I define a totalitarian state as one with a system of government that is unlimited constitutionally or by countervailing powers in society (such as by a church, rural gentry, labor unions, or regional powers); is not held responsible to the public by periodic secret and competitive elections; and employs its unlimited power to control all aspects of society, including the family, religion, education, business, private property, and social relationships. Under Stalin, the Soviet Union was thus totalitarian, as was Mao's China, Pol Pot's Cambodia, Hitler's Germany, and U Ne Win's Burma. Totalitarianism is then a political ideology for which a totalitarian government is the agency for realizing its ends. Thus, totalitarianism characterizes such ideologies as state socialism (as in Burma), Marxism-Leninism as in former East Germany, and Nazism. Even revolutionary Moslem Iran since the overthrow of the Shah in 1978–79 has been totalitarian—here totalitarianism was married to Moslem fundamentalism. In short, totalitarianism is the ideology of absolute power. State socialism, communism, Nazism, fascism, and Moslem fundamentalism have been some of its recent raiments. Totalitarian governments have been its agency. The state, with its international legal sovereignty and independence, has been its base. As will be pointed out, mortacracy is the result.
Estimates
In his estimates, Rudolph Rummel relied mostly on historical accounts, an approach that rarely matches the accuracy of contemporary scholarly estimates. In the case of Mexican democide, Rummel wrote that while "these figures amount to little more than informed guesses", he thought "there is enough evidence to at least indict these authoritarian regimes for megamurder." According to Rummel, his research showed that the death toll from democide is far greater than the death toll from war. After studying over 8,000 reports of government-caused deaths, Rummel estimated that there have been 262 million victims of democide in the last century. According to his figures, six times as many people have died from the actions of people working for governments as have died in battle. One of his main findings was that democracies have much less democide than authoritarian regimes. Rummel argued that there is a relation between political power and democide. Political mass murder grows increasingly common as political power becomes unconstrained. At the other end of the scale, where power is diffuse, checked, and balanced, political violence is a rarity. According to Rummel, "[t]he more power a regime has, the more likely people will be killed. This is a major reason for promoting freedom." Rummel argued that "concentrated political power is the most dangerous thing on earth."
Rummel's estimates, especially about Communist democide, typically included a wide range and cannot be considered determinative. Rummel calculated nearly 43 million deaths due to democide inside and outside the Soviet Union during Stalin's regime. This is much higher than the figure of 20 million often quoted in the popular press, or a 2010s scholarly figure of 9 million. Rummel responded that the 20 million estimate is based on a figure from Robert Conquest's The Great Terror and that Conquest's qualifier "almost certainly too low" is usually forgotten. For Rummel, Conquest's calculations excluded camp deaths before 1936 and after 1950, executions (1939–1953), the forced population transfer in the Soviet Union (1939–1953), the deportation within the Soviet Union of minorities (1941–1944), and those the Soviet Red Army and Cheka (the secret police) executed throughout Eastern Europe after their conquest during the 1944–1945 period. Moreover, the Holodomor that killed 5 million in 1932–1934 (according to Rummel) is also not included. According to Rummel, forced labor, executions, and concentration camps were responsible for over one million deaths in the Democratic People's Republic of Korea from 1948 to 1987. After decades of research in the state archives, most scholars say that Stalin's regime killed between 6 and 9 million, which is considerably less than originally thought, while Nazi Germany killed at least 11 million, which is in line with previous estimates.
Application
Authoritarian and totalitarian regimes
Communist regimes
Rummel applied the concept of democide to Communist regimes. In 1987, in his book Death by Government, Rummel estimated that 148 million were killed by Communist governments from 1917 to 1987. The list of Communist countries with more than 1 million estimated victims included:
China at 76,702,000 (1949–1987),
the Soviet Union at 61,911,000 (1917–1987),
Democratic Kampuchea (1975–1979) at 2,035,000,
Vietnam (1945–1987) at 1,670,000,
Poland (1945–1987) at 1,585,000,
North Korea (1948–1987) at 1,563,000,
Yugoslavia (1945–1987) at 1,072,000.
In 1993, Rummel wrote: "Even were we to have total access to all communist archives we still would not be able to calculate precisely how many the communists murdered. Consider that even in spite of the archival statistics and detailed reports of survivors, the best experts still disagree by over 40 percent on the total number of Jews killed by the Nazis. We cannot expect near this accuracy for the victims of communism. We can, however, get a probable order of magnitude and a relative approximation of these deaths within a most likely range." In 1994, Rummel updated his estimates for Communist regimes at about 110 million people, foreign and domestic, killed by Communist democide from 1900 to 1987. Due to additional information about Mao Zedong's culpability in the Great Chinese Famine according to Mao: The Unknown Story, a 2005 book authored by Jon Halliday and Jung Chang, Rummel revised upward his total for Communist democide to about 148 million, using their estimate of 38 million famine deaths.
Rummel's figures for Communist governments have been criticized for the methodology which he used to arrive at them, and they have also been criticized for being higher than the figures which have been given by most scholars (for example, The Black Book of Communism estimates the number of those killed in the USSR at 20 million).
Right-wing authoritarian, fascist, and feudal regimes
Estimates by Rummel for fascist or right-wing authoritarian regimes include:
Nazi Germany at 20,946,000 (1933–1945),
Nationalist China (1925–1949) and later Taiwan (1949–1987) at 10,214,000,
Empire of Japan at 5,964,000 (1900–1945).
Estimates for other regime-types include:
the Ottoman Empire at 1,883,000 (Armenian genocide and Greek genocide),
Pakistan at 1,503,000 (1971 Bangladesh genocide),
the Porfiriato in Mexico at somewhere between 600,000 and 3,000,000, most likely closer to 1,417,000 (1900–1920),
the Russian Empire at 1,066,000 (1900–1917).
Rummel characterizes Communist and Nationalist China, Nazi Germany, and the Soviet Union as deka-megamurderers (128,168,000 killed), Cambodia, Japan, Pakistan, Poland, Turkey, Vietnam, and Yugoslavia as the lesser megamurderers (19,178,000), and Mexico, North Korea, and feudal Russia as suspected megamurderers (4,145,000). Rummel wrote that "even though the Nazis hardly matched the democide of the Soviets and Communist Chinese", they "proportionally killed more".
Colonial regimes
In response to David Stannard's figures about what he terms "the American Holocaust", Rummel estimated that over the centuries of European colonization about 2 million to 15 million American indigenous people were victims of democide, excluding military battles and unintentional deaths in Rummel's definition. Rummel wrote that "[e]ven if these figures are remotely true, then this still make this subjugation of the Americas one of the bloodier, centuries long, democides in world history."
Rummel stated that his estimate for those killed by colonialism is 50,000,000 persons in the 20th century; this was revised upwards from his initial estimate of 815,000 dead.
Democratic regimes
While democratic regimes are considered by Rummel to be the least likely to commit democide and engage in wars per the democratic peace theory, Rummel wrote that
"democracies themselves are responsible for some of this democide. Detailed estimates have yet to be made, but preliminarily work suggests that some 2,000,000 foreigners have been killed in cold blood by democracies."
Foreign policy and secret services of democratic regimes "may also carry on subversive activities in other states, support deadly coups, and actually encourage or support rebel or military forces that are involved in democidal activities. Such was done, for example, by the American CIA in the 1952 coup against Iran Prime Minister Mossadeq and the 1973 coup against Chile's democratically elected President Allende by General Pinochet. Then there was the secret support given the military in El Salvador and Guatemala although they were slaughtering thousands of presumed communist supporters, and that of the Contras in their war against the Sandinista government of Nicaragua in spite of their atrocities. Particularly reprehensible was the covert support given to the Generals in Indonesia as they murdered hundreds of thousands of communists and others after the alleged attempted communist coup in 1965, and the continued secret support given to General Agha Mohammed Yahya Khan of Pakistan even as he was involved in murdering over a million Bengalis in East Pakistan (now Bangladesh)."
According to Rummel, examples of democratic democide would include "those killed in indiscriminate or civilian targeted city bombing, as of Germany and Japan in World War II. It would include the large scale massacres of Filipinos during the bloody American colonization of the Philippines at the beginning of this century, deaths in British concentration camps in South Africa during the Boer War, civilian deaths due to starvation during the British blockade of Germany in and after World War I, the rape and murder of helpless Chinese in and around Peking in 1900, the atrocities committed by Americans in Vietnam, the murder of helpless Algerians during the Algerian War by the French, and the unnatural deaths of German prisoners of war in French and American POW camps after World War II."
See also
Anti-communist mass killings
Classicide
Cultural genocide
Environmental killings
Ethnic cleansing
Ethnic conflict
Ethnocide
Genocide of indigenous peoples
Genocides in history
List of ethnic cleansing campaigns
List of genocides
Policide
Population cleansing
Political cleansing of population
Pogrom
Lynching
Related topics
Communal violence
Comparison of Nazism and Stalinism
Crimes against humanity
Crimes against humanity under communist regimes
Criticism of communist party rule
Cultural conflict
Ethnic hatred
Ethnic violence
Extrajudicial killing
Extrajudicial punishment
Hate crime
Hate group
Hate media
Hate speech
Hate studies
Nuclear warfare
Police brutality
Religious violence
Sectarian violence
Social cleansing
Social murder
Terrorism
Vigilantism
Violence against LGBT people
War crime
References
Further reading
Bibliography of genocide studies
External links
Power Kills – the website of Rudolph Rummel
Crimes
Genocide
Human rights abuses
Murder
Political neologisms
Violence
Killings by type
Politicides
Political and cultural purges | Democide | [
"Biology"
] | 3,110 | [
"Behavior",
"Aggression",
"Human behavior",
"Violence"
] |
8,593 | https://en.wikipedia.org/wiki/Damascus%20steel | Damascus steel (Arabic: فولاذ دمشقي) refers to the high carbon crucible steel of the blades of historical swords forged using the wootz process in the Near East, characterized by distinctive patterns of banding and mottling reminiscent of flowing water, sometimes in a "ladder" or "rose" pattern. "Damascus steel" developed a high reputation for being tough, resistant to shattering, and capable of being honed to a sharp, resilient edge.
The term "Damascus steel" traces its roots to the medieval city of Damascus, Syria, perhaps as an early example of branding. However, there is now a general agreement that many of the swords, or at least the steel ingots from which they were forged, were imported from elsewhere. Originally, they came from either Southern India, where the steel-making techniques used were first developed, or from Khorasan, Iran.
The reputation and history of Damascus steel has given rise to many legends, such as the ability to cut through a rifle barrel or to cut a hair falling across the blade. Although many types of modern steel outperform ancient Damascus alloys, chemical reactions in the production process made the blades extraordinary for their time, as Damascus steel was very flexible and very hard at the same time.
The methods used to create medieval Damascus steel died out by the late 19th century. Modern steelmakers and metallurgists have studied it extensively, developing theories on how it was produced, and significant advances have been made. While the exact pattern of medieval Damascus steel has not been reproduced, many similar versions have been made, using similar techniques of lamination, banding, and patterning. These modern reproductions have also been called Damascus steel or "Modern Damascus".
Naming
The origin of the name "Damascus Steel" is contentious. Islamic scholars al-Kindi (full name Abu Ya'qub ibn Ishaq al-Kindi, circa 800 CE – 873 CE) and al-Biruni (full name Abu al-Rayhan Muhammad ibn Ahmad al-Biruni, circa 973 CE – 1048 CE) both wrote about swords and steel made for swords, based on their surface appearance, geographical location of production or forging, or the name of the smith, and each mentions "damascene" or "damascus" swords to some extent.
Drawing from al-Kindi and al-Biruni, there are three potential sources for the term "Damascus" in the context of steel:
Al-Kindi called swords produced and forged in Damascus as Damascene but these swords were not described as having a pattern in the steel.
Al-Biruni mentions a sword-smith called Damasqui who made swords of crucible steel.
The most common explanation is that steel is named after Damascus, the capital city of Syria and one of the largest cities in the ancient Levant. In Damascus, where many of these swords were sold, there is no evidence of local production of crucible steel, though there is evidence of imported steel being forged into swords in Damascus. The name could have been an early form of branding.
"Damascus steel" may either refer to swords made or sold in Damascus directly, or simply those with the distinctive surface patterns on the swords, in the same way that Damask fabrics (also named for Damascus), got their name.
History
Damascus blades were first manufactured in the Near East from ingots of wootz steel that were imported from Southern India (present-day Telangana, Tamil Nadu, and Kerala). Al-Kindi states that crucible steel, known as Muharrar, was also made in Khorasan, in addition to steel that was imported. There was also domestic production of crucible steel outside of India, including Merv (Turkmenistan) and Yazd, Iran.
In addition to being made into blades in India (particularly Golconda) and Sri Lanka, wootz/ukku was exported as ingots to various production centers, including Khorasan and Isfahan, where the steel was used to produce blades, as well as across the Middle East.
The Arabs introduced the wootz steel to Damascus, where a weapons industry thrived. From the 3rd century to the 17th century, steel ingots were being shipped to the Middle East from South India.
Reputation
The reputation and history of Damascus steel has given rise to many legends, such as the ability to cut through a rifle barrel or to cut a hair falling across the blade. Although many types of modern steel outperform ancient Damascus alloys, chemical reactions in the production process made the blades extraordinary for their time, as Damascus steel was very flexible and very hard at the same time.
Extant examples of patterned crucible steel swords were often tempered in such a way as to retain a bend after being flexed past their elastic limit.
Cultural references and misconceptions
The blade that Beowulf used to kill Grendel's mother in the story Beowulf was described in some Modern English translations as "damascened".
A misconception that the steel was hardened by thrusting it six times in the back and thighs of a slave originated in an article in the November 4, 1894 issue of the Chicago Tribune titled Tempering Damascus Blades. The note asserts that a certain "Prof. von Eulenspiegel" found a scroll "among the ruins of ancient Tyre"; "Eulenspiegel" is the name of a legendary prankster of medieval Germany.
Material and mechanical properties
Verhoeven, Peterson, and Baker completed mechanical characterization of a Damascus sword, performing tensile testing as well as hardness testing. They found that the Damascus steel was somewhat comparable to hot-rolled steel bars with 1.0 wt% carbon with regards to mechanical properties. The average yield strength of 740 MPa was higher than the hot-rolled steel yield strength of 550 MPa, and the average tensile strength of 1070 MPa was higher than the hot-rolled steel tensile strength of 965 MPa.
These results are likely due to the finer pearlite spacing in the Damascus steel, refining the microstructure. The elongation and reduction in area were also slightly higher than the hot-rolled steel averages. Rockwell hardness measurements of the Damascus steel ranged from 62 to 67. These mechanical properties were consistent with the expected properties from the constituent steels of the material, falling between the upper and lower bounds created by the original steels.
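The observation that the measured values fall between bounds set by the constituent steels is the usual rule-of-mixtures behaviour of layered composites. The following Python sketch is purely illustrative: it evaluates the classical Voigt (upper) and Reuss (lower) bounds for a two-phase laminate, with an assumed 50/50 volume fraction and assumed strength values that are not taken from the studies cited here (strictly, these bounds apply to elastic properties and are only a rough guide for strength).

def voigt_bound(f1, p1, p2):
    # Upper (iso-strain) bound: loading parallel to the layers.
    return f1 * p1 + (1.0 - f1) * p2

def reuss_bound(f1, p1, p2):
    # Lower (iso-stress) bound: loading perpendicular to the layers.
    return 1.0 / (f1 / p1 + (1.0 - f1) / p2)

# Assumed example values: a 50/50 laminate of two steels with tensile strengths
# of 1100 MPa and 800 MPa (illustrative numbers only).
f, strong, soft = 0.5, 1100.0, 800.0
print("upper (Voigt) bound:", voigt_bound(f, strong, soft), "MPa")          # 950.0
print("lower (Reuss) bound:", round(reuss_bound(f, strong, soft), 1), "MPa")  # ~926.3

A composite's measured property is expected to lie between these two values, which is the sense in which the Damascus samples above "fall between the upper and lower bounds created by the original steels".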
Folding
Another study investigated the properties of Damascus steel produced from 1075 steel and 15N20 steel, which have approximately equal amounts of carbon, but the 15N20 steel notably has 2 wt% nickel. The 1075 steel is known for high strength, but low toughness, with a pearlitic microstructure, and the 15N20 steel is known for high toughness with a ferritic microstructure. The mechanical properties of the resultant laminate Damascus steel were characterized, in samples with 54 folds in production as well as samples with 250 folds.
Charpy V-notch impact tests showed that the 54-fold samples had an impact toughness of 4.36 J/cm2, while the 250-fold samples had an impact toughness of 5.49 J/cm2. Tensile testing showed that yield strengths and elongations for both samples were similar, at around 475 MPa and 3.2% respectively. However, the maximum strength of the 54-fold samples was notably lower than that of the 250-fold samples (750 MPa vs. 860 MPa). This study showed that the folding process has a significant impact on the mechanical properties of the steel, with increasing toughness as fold numbers increase. This effect is likely due to the thinning and refinement of the microstructure, and to achieve optimal properties, the steel should be folded a few hundred times.
Further studies of Damascus steel created other steels showed similar results, confirming that increasing folds results in greater impact strength and toughness, and extending this finding to be consistent at higher temperatures. They also compare mechanical properties of the Damascus to the original materials, finding that the properties of the Damascus steel lie in between those of the two constituent steels, which is consistent with composite material properties.
Lamination and banding
The processing and design of the laminations and bands can have a significant effect on mechanical properties as well. Regardless of tempering temperature and the liquid the steel is quenched in, the impact strength of Damascus steel where the impact is perpendicular to the band orientation is significantly higher than the impact strength where the impact is parallel to the band orientation.
This is due to the failure and fracture mechanisms in Damascus steel, where cracks propagate fastest along the interfaces between the two constituent steels. When impact is directed parallel to the bands, cracks are able to propagate easily along the lamination interfaces. When impact is directed perpendicular to the bands, the lamination interfaces are effectively protected, deflecting the cracks and increasing the energy required for cracks to propagate through the material. Band orientation should be chosen to protect against deformation and increase toughness.
Metallurgical process
Identification of crucible "Damascus" steel based on metallurgical structures is difficult, as crucible steel cannot be reliably distinguished from other types of steel by just one criterion, so the following distinguishing characteristics of crucible steel must be taken into consideration:
The crucible steel was liquid, leading to a relatively homogeneous steel content with virtually no slag
The formation of dendrites is a typical characteristic
The segregation of elements into dendritic and interdendritic regions throughout the sample
By these definitions, modern recreations of crucible steel are consistent with historic examples.
Addition of carbon
During the smelting process to obtain wootz steel ingots, woody biomass and leaves are known to have been used as carburizing additives along with certain specific types of iron rich in microalloying elements. These ingots would then be further forged and worked into Damascus steel blades. Research now shows that carbon nanotubes can be derived from plant fibers, suggesting how the nanotubes were formed in the steel. Some experts expect to discover such nanotubes in more relics as they are analyzed more closely.
Wootz was also mentioned to have been made out of a co-fusion process using "shaburqan" (hard steel, likely white cast iron) and "narmahan" (soft steel) by Biruni, both of which were forms of either high- and low-carbon bloomery iron, or low-carbon bloom with cast iron. In such a crucible recipe, no added plant material is necessary to provide the required carbon content, and as such any nanowires of cementite or carbon nanotubes would not have been the result of plant fibers.
Modern research
A research team in Germany published a report in 2006 revealing nanowires and carbon nanotubes in a blade forged from Damascus steel, although John Verhoeven of Iowa State University in Ames suggests that the research team which reported nanowires in crucible steel was seeing cementite, which can itself exist as rods, so there might not be any carbon nanotubes in the rod-like structure.
Loss of the technique
Production of these patterned swords gradually declined, ceasing by around 1900, with the last account, from Sri Lanka in 1903, documented by Coomaraswamy. Some gunsmiths during the 18th and 19th centuries used the term "damascus steel" to describe their pattern-welded gun barrels, but they did not use crucible steel. Several modern theories have ventured to explain this decline:
Due to the distance of trade for this steel, a sufficiently lengthy disruption of the trade routes could have ended the production of Damascus steel and eventually led to the loss of the technique.
The need for key trace impurities of carbide formers such as tungsten, vanadium or manganese within the materials needed for the production of the steel may be absent if this material was acquired from different production regions or smelted from ores lacking these key trace elements.
The technique for controlled thermal cycling after the initial forging at a specific temperature could also have been lost, thereby preventing the final damask pattern in the steel from occurring.
The disruption of mining and steel manufacture by the British Raj in the form of production taxes and export bans may have also contributed to a loss of knowledge of key ore sources or key techniques.
Modern conjecture
The discovery of alleged carbon nanotubes in the Damascus steel's composition, if true, could support the hypothesis that wootz production was halted due to a loss of ore sources or technical knowledge, since the precipitation of carbon nanotubes probably resulted from a specific process that may be difficult to replicate should the production technique or raw materials used be significantly altered. The claim that carbon nanowires were found has not been confirmed by further studies, and there is contention among academics about whether the nanowires observed are actually stretched rafts or rods formed out of cementite spheroids.
Modern attempts to duplicate the metal have not always been entirely successful due to differences in raw materials and manufacturing techniques, but several individuals in modern times have successfully produced pattern forming hypereutectoid crucible steel with visible carbide banding on the surface, consistent with original Damascus Steel.
Modern reproduction
Recreating Damascus steel has been attempted by archaeologists using experimental archaeology. Many have attempted to discover or reverse-engineer the process by which it was made.
Moran: billet welding
Since the well-known technique of pattern welding—the forge-welding of a blade from several differing pieces—produced surface patterns similar to those found on Damascus blades, some modern blacksmiths were erroneously led to believe that the original Damascus blades were made using this technique. However today, the difference between wootz steel and pattern welding is fully documented and well understood. Pattern-welded steel has been referred to as "Damascus steel" since 1973 when Bladesmith William F. Moran unveiled his "Damascus knives" at the Knifemakers' Guild Show.
This "Modern Damascus" is made from several types of steel and iron slices welded together to form a billet, and currently, the term "Damascus" (although technically incorrect) is widely accepted to describe modern pattern-welded steel blades in the trade. The patterns vary depending on how the smith works the billet. The billet is drawn out and folded until the desired number of layers are formed. To attain a Master Smith rating with the American Bladesmith Society that Moran founded, the smith must forge a Damascus blade with a minimum of 300 layers.
Verhoeven and Pendray: crucible
J. D. Verhoeven and A. H. Pendray published an article on their attempts to reproduce the elemental, structural, and visual characteristics of Damascus steel. They started with a cake of steel that matched the properties of the original wootz steel from India, which also matched a number of original Damascus swords that Verhoeven and Pendray had access to.
The wootz was in a soft, annealed state, with a grain structure and beads of pure iron carbide in cementite spheroids, which resulted from its hypereutectoid state. Verhoeven and Pendray had already determined that the grains on the surface of the steel were grains of iron carbide—their goal was to reproduce the iron carbide patterns they saw in the Damascus blades from the grains in the wootz.
Although such material could be worked at low temperatures to produce the striated Damascene pattern of intermixed ferrite/pearlite and cementite spheroid bands in a manner identical to pattern-welded Damascus steel, any heat treatment sufficient to dissolve the carbides was thought to permanently destroy the pattern. However, Verhoeven and Pendray discovered that in samples of true Damascus steel, the Damascene pattern could be recovered by thermally cycling and thermally manipulating the steel at a moderate temperature.
They found that certain carbide forming elements, one of which was vanadium, did not disperse until the steel reached higher temperatures than those needed to dissolve the carbides. Therefore, a high heat treatment could remove the visual evidence of patterning associated with carbides but did not remove the underlying patterning of the carbide forming elements.
A subsequent lower-temperature heat treatment, at a temperature at which the carbides were again stable, could recover the structure by the binding of carbon by those elements and causing the segregation of cementite spheroids to those locations.
Thermal cycling after forging allows for the aggregation of carbon onto these carbide formers, as carbon migrates much more rapidly than the carbide formers. Progressive thermal cycling leads to the coarsening of the cementite spheroids via Ostwald ripening.
Anosov, Wadsworth and Sherby: bulat
In Russia, chronicles record the use of a material known as bulat steel to make highly valued weapons, including swords, knives, and axes. Tsar Michael of Russia reportedly had a bulat helmet made for him in 1621. The exact origin or the manufacturing process of the bulat is unknown, but it was likely imported to Russia via Persia and Turkestan, and it was similar and possibly the same as Damascus steel. Pavel Petrovich Anosov successfully reproduced the process in the mid-19th century. Wadsworth and Sherby also researched the reproduction of bulat steel and published their results in 1980.
Additional research
A team of researchers based at the Technical University of Dresden that used x-rays and electron microscopy to examine Damascus steel discovered the presence of cementite nanowires and carbon nanotubes. Peter Paufler, a member of the Dresden team, says that these nanostructures are a result of the forging process.
Sanderson proposes that the process of forging and annealing accounts for the nano-scale structures.
German researchers have investigated the possibility of manufacturing high-strength Damascus steel through laser additive manufacturing techniques as opposed to the traditional folding and forging. The resulting samples exhibited superior mechanical properties to ancient Damascus steels, with a tensile strength of 1300 MPa and 10% elongation.
In gun making
Prior to the early 20th century, all shotgun barrels were forged by heating narrow strips of iron and steel and shaping them around a mandrel. This process was referred to as "laminating" or "Damascus". These types of barrels earned a reputation for weakness and were never meant to be used with modern smokeless powder, or any kind of moderately powerful explosive. Because of the resemblance to Damascus steel, higher-end barrels were made by Belgian and British gun makers. These barrels are proof marked and meant to be used with light pressure loads. Current gun manufacturers make slide assemblies and small parts such as triggers and safeties for Colt M1911 pistols from powdered Swedish steel resulting in a swirling two-toned effect; these parts are often referred to as "Stainless Damascus".
See also
Toledo steel
Crucible steel
Wootz steel
Noric steel
Bulat steel
Tamahagane steel
Mokume-gane
Laminated steel blade
References
External links
"Damascene Technique in Metal Working"
John Verhoeven: Mystery of Damascus Steel Swords Unveiled
Steels
Steelmaking
History of Damascus
Metalworking
Lost inventions
Arab inventions | Damascus steel | [
"Chemistry"
] | 3,945 | [
"Steels",
"Metallurgical processes",
"Steelmaking",
"Alloys"
] |
8,603 | https://en.wikipedia.org/wiki/Diffraction | Diffraction is the deviation of waves from straight-line propagation without any change in their energy due to an obstacle or through an aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Diffraction is the same physical effect as interference, but interference is typically applied to superposition of a few waves and the term diffraction is used when many waves are superposed.
Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple closely spaced openings, a complex pattern of varying intensity can result.
These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels).
History
The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves.
In 1818, supporters of the corpuscular theory of light proposed that the Paris Academy prize question address diffraction, expecting to see the wave theory defeated. However,
Augustin-Jean Fresnel took the prize with his new theory of wave propagation, combining the ideas of Christiaan Huygens with Young's interference concept. Siméon Denis Poisson challenged the Fresnel theory by showing that it predicted light in the shadow behind a circular obstruction; Dominique-François-Jean Arago proceeded to demonstrate experimentally that such light is visible, confirming Fresnel's diffraction model.
Mechanism
In classical physics diffraction arises because of how waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave. The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima.
In the modern quantum mechanical understanding of light propagation through a slit (or slits) every photon is described by its wavefunction that determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance, and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens-Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is going to be the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (that is proportional to the resulting intensity of classical formalism).
There are various analytical models for photons which allow the diffracted field to be calculated, including the Kirchhoff diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods. In many cases it is assumed that there is only one scattering event, what is called kinematical diffraction, with an Ewald's sphere construction used to represent that there is no change in energy during the diffraction process. For matter waves a similar but slightly different approach is used based upon a relativistically corrected form of the Schrödinger equation, as first detailed by Hans Bethe. The Fraunhofer and Fresnel limits exist for these as well, although they correspond more to approximations for the matter wave Green's function (propagator) for the Schrödinger equation. More common are full multiple-scattering models, particularly in electron diffraction; in some cases similar dynamical diffraction models are also used for X-rays.
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle in which case waves will cancel one another out.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem.
Examples
The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc.
This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example.
Diffraction in the atmosphere by small particles can cause a corona - a bright disc and rings around a bright light source like the sun or the moon. At the opposite point one may also observe glory - bright rings around the shadow of the observer. In contrast to the corona, glory requires the particles to be transparent spheres (like fog droplets), since the backscattering of the light that forms the glory involves refraction and internal reflection within the droplet.
A shadow of a solid object, using light from a compact source, shows small fringes near its edges.
Diffraction spikes are diffraction patterns caused by a non-circular aperture in a camera or by support struts in a telescope; in normal vision, diffraction through eyelashes may produce such spikes.
The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree.
Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
Other examples of diffraction are considered below.
Single-slit diffraction
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle.
An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across the width of the slit, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by $2\pi$ or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit, when the path difference between them is equal to $\lambda/2$. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is approximately $\frac{d \sin\theta}{2}$ so that the minimum intensity occurs at an angle $\theta_{\min}$ given by
$$d \sin\theta_{\min} = \lambda,$$
where $d$ is the width of the slit, $\theta_{\min}$ is the angle of incidence at which the minimum intensity occurs, and $\lambda$ is the wavelength of the light.
A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles $\theta_n$ given by
$$d \sin\theta_n = n\lambda,$$
where $n$ is an integer other than zero.
There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction equation as
$$I(\theta) = I_0 \,\operatorname{sinc}^2\!\left(\frac{d\pi}{\lambda}\sin\theta\right),$$
where $I(\theta)$ is the intensity at a given angle, $I_0$ is the intensity at the central maximum ($\theta = 0$), which is also a normalization factor of the intensity profile that can be determined by an integration from $\theta = -\frac{\pi}{2}$ to $\theta = \frac{\pi}{2}$ and conservation of energy, and $\operatorname{sinc}(x) = \frac{\sin x}{x}$, which is the unnormalized sinc function.
This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit.
From the intensity profile above, if $d \ll \lambda$, the intensity will have little dependency on $\theta$, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if $d \gg \lambda$, only $\theta \approx 0$ would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics.
When the incident angle $\theta_\text{i}$ of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes:
$$I(\theta) = I_0 \,\operatorname{sinc}^2\!\left[\frac{d\pi}{\lambda}\left(\sin\theta \pm \sin\theta_\text{i}\right)\right].$$
The choice of plus/minus sign depends on the definition of the incident angle $\theta_\text{i}$.
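As a rough numerical check of the single-slit relations above, the short Python sketch below evaluates the Fraunhofer intensity profile and the angle of the first minimum; the slit width and wavelength are assumed example values chosen only for illustration.

import numpy as np

d = 10e-6      # assumed slit width: 10 micrometres
lam = 500e-9   # assumed wavelength: 500 nm

theta_min = np.arcsin(lam / d)   # first minimum from d*sin(theta) = lambda
print(f"first minimum at about {np.degrees(theta_min):.2f} degrees")

theta = np.linspace(-0.2, 0.2, 2001)          # angles in radians
x = d * np.pi * np.sin(theta) / lam           # argument of the unnormalised sinc
intensity = np.sinc(x / np.pi) ** 2           # np.sinc(u) = sin(pi*u)/(pi*u)
print("relative intensity near the first minimum:",
      intensity[np.argmin(np.abs(theta - theta_min))])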
Diffraction grating
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles $\theta_m$ which are given by the grating equation
$$d\left(\sin\theta_m \pm \sin\theta_i\right) = m\lambda,$$
where $\theta_i$ is the angle at which the light is incident, $d$ is the separation of grating elements, and $m$ is an integer which can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns.
The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
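The grating equation above can also be evaluated numerically. The sketch below lists the propagating diffraction orders for an assumed grating and wavelength at normal incidence; the line density and wavelength are illustrative assumptions, not values from the text.

import numpy as np

lam = 532e-9          # assumed wavelength (m)
d = 1e-3 / 600.0      # assumed grating spacing: 600 lines per millimetre
theta_i = 0.0         # assumed normal incidence

# Grating equation (one sign convention): d*(sin(theta_m) - sin(theta_i)) = m*lambda
for m in range(-4, 5):
    s = m * lam / d + np.sin(theta_i)
    if abs(s) <= 1.0:     # the order propagates only if |sin(theta_m)| <= 1
        print(f"order m = {m:+d}: theta_m = {np.degrees(np.arcsin(s)):+.2f} degrees")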
Circular aperture
The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy disk. The variation in intensity with angle is given by
$$I(\theta) = I_0 \left(\frac{2 J_1(ka\sin\theta)}{ka\sin\theta}\right)^2,$$
where $a$ is the radius of the circular aperture, $k$ is equal to $\frac{2\pi}{\lambda}$, and $J_1$ is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
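A minimal numerical sketch of the Airy pattern follows, using the Bessel-function form above; the aperture radius and wavelength are assumed example values, and the location of the first dark ring is compared against the familiar 1.22 λ/D estimate.

import numpy as np
from scipy.special import j1        # Bessel function of the first kind, order 1

lam = 550e-9                        # assumed wavelength (m)
a = 0.5e-3                          # assumed aperture radius (m)
k = 2 * np.pi / lam

theta = np.linspace(1e-9, 2e-3, 4000)    # small angles in radians; start above zero
x = k * a * np.sin(theta)
intensity = (2 * j1(x) / x) ** 2         # Airy pattern normalised so that I0 = 1

# The first dark ring sits where J1 first crosses zero (x ~ 3.8317),
# i.e. sin(theta) ~ 1.22 * lambda / (2a).
first_null = theta[np.argmax(j1(x) < 0.0)]
print(f"first dark ring near {first_null*1e3:.3f} mrad "
      f"(expected ~ {1.22 * lam / (2 * a) * 1e3:.3f} mrad)")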
General aperture
The wave that emerges from a point source has amplitude $\psi$ at location $\mathbf{r}$ that is given by the solution of the frequency-domain wave equation for a point source (the Helmholtz equation),
$$\nabla^2 \psi + k^2 \psi = \delta(\mathbf{r}),$$
where $\delta(\mathbf{r})$ is the 3-dimensional delta function. The delta function has only radial dependence, so the Laplace operator (a.k.a. scalar Laplacian) in the spherical coordinate system simplifies to
$$\nabla^2 \psi = \frac{1}{r} \frac{\partial^2}{\partial r^2}(r\psi).$$
(See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention $e^{-i\omega t}$) is
$$\psi(r) = \frac{e^{ikr}}{4\pi r}.$$
This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector $\mathbf{r}'$, and the field point is located at the point $\mathbf{r}$, then we may represent the scalar Green's function (for arbitrary source location) as
$$\psi(\mathbf{r} \mid \mathbf{r}') = \frac{e^{ik\left|\mathbf{r}-\mathbf{r}'\right|}}{4\pi\left|\mathbf{r}-\mathbf{r}'\right|}.$$
Therefore, if an electric field $E_\mathrm{inc}(x, y)$ is incident on the aperture, the field produced by this aperture distribution is given by the surface integral
$$\Psi(r) \propto \iint_\mathrm{aperture} E_\mathrm{inc}(x', y')\, \frac{e^{ik\left|\mathbf{r}-\mathbf{r}'\right|}}{4\pi\left|\mathbf{r}-\mathbf{r}'\right|}\, dx'\, dy',$$
where the source point in the aperture is given by the vector
$$\mathbf{r}' = x'\,\hat{\mathbf{x}} + y'\,\hat{\mathbf{y}}.$$
In the far field, wherein the parallel-rays approximation can be employed, the Green's function,
$$\psi(\mathbf{r} \mid \mathbf{r}') = \frac{e^{ik\left|\mathbf{r}-\mathbf{r}'\right|}}{4\pi\left|\mathbf{r}-\mathbf{r}'\right|},$$
simplifies to
$$\psi(\mathbf{r} \mid \mathbf{r}') = \frac{e^{ikr}}{4\pi r}\, e^{-ik\left(\mathbf{r}'\cdot\hat{\mathbf{r}}\right)},$$
as can be seen in the adjacent figure.
The expression for the far-zone (Fraunhofer region) field becomes
$$\Psi(r) \propto \frac{e^{ikr}}{4\pi r} \iint_\mathrm{aperture} E_\mathrm{inc}(x', y')\, e^{-ik\left(x'\sin\theta\cos\phi + y'\sin\theta\sin\phi\right)}\, dx'\, dy'.$$
Now, since
$$\sin\theta\cos\phi = \frac{x}{r}$$
and
$$\sin\theta\sin\phi = \frac{y}{r},$$
the expression for the Fraunhofer region field from a planar aperture now becomes
$$\Psi(r) \propto \frac{e^{ikr}}{4\pi r} \iint_\mathrm{aperture} E_\mathrm{inc}(x', y')\, e^{-\frac{ik}{r}\left(x x' + y y'\right)}\, dx'\, dy'.$$
Letting
$$k_x = k\sin\theta\cos\phi$$
and
$$k_y = k\sin\theta\sin\phi,$$
the Fraunhofer region field of the planar aperture assumes the form of a Fourier transform
$$\Psi(r) \propto \frac{e^{ikr}}{4\pi r} \iint_\mathrm{aperture} E_\mathrm{inc}(x', y')\, e^{-i\left(k_x x' + k_y y'\right)}\, dx'\, dy'.$$
In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics).
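Because the Fraunhofer field is the spatial Fourier transform of the aperture distribution, a far-field pattern can be sketched numerically with a 2-D FFT. The short Python sketch below uses an assumed square aperture on an assumed grid; it is meant only to illustrate the Fourier-transform relationship, not any particular experimental geometry.

import numpy as np

N, width = 512, 2e-3                       # grid points and aperture-plane width (m)
coords = np.linspace(-width / 2, width / 2, N)
X, Y = np.meshgrid(coords, coords)

# Assumed aperture: a 0.2 mm x 0.2 mm square hole with unit illumination inside.
aperture = ((np.abs(X) < 0.1e-3) & (np.abs(Y) < 0.1e-3)).astype(float)

# The Fraunhofer (far-field) amplitude is proportional to the 2-D Fourier transform
# of the aperture distribution; fftshift moves the zero-order peak to the centre.
far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# Spatial frequencies; for small angles sin(theta_x) ~ lambda * fx.
fx = np.fft.fftshift(np.fft.fftfreq(N, d=coords[1] - coords[0]))
print("zero-order peak at fx =", fx[np.argmax(intensity.max(axis=0))], "cycles/m")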
Propagation of a laser beam
The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction of a Gaussian beam or even reversed to convergence if the refractive index of the propagation medium increases with the light intensity. This may result in a self-focusing effect.
When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in horizontal, the laser beam divergence will be lower in the vertical direction than in the horizontal.
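For a rough sense of scale, the far-field half-angle divergence of an ideal Gaussian beam is commonly written as θ ≈ λ/(πw0), with w0 the beam-waist radius; this relation is not stated in the text above and is used here only as a standard illustrative formula with assumed numbers, showing that a smaller waist diverges faster.

import numpy as np

lam = 633e-9                              # assumed wavelength: 633 nm
for w0 in (0.25e-3, 0.5e-3, 1.0e-3):      # assumed beam-waist radii (m)
    theta = lam / (np.pi * w0)            # far-field half-angle divergence (rad)
    print(f"w0 = {w0 * 1e3:.2f} mm  ->  divergence ~ {theta * 1e3:.3f} mrad")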
Diffraction-limited imaging
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is
$$\Delta x = 1.22\,\lambda N,$$
where $\lambda$ is the wavelength of the light and $N$ is the f-number (focal length $f$ divided by aperture diameter $D$) of the imaging optics; this is strictly accurate for $N \gg 1$ (paraxial case). In object space, the corresponding angular resolution is
$$\theta \approx 1.22\,\frac{\lambda}{D},$$
where $D$ is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror).
Two point sources will each produce an Airy pattern – see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other.
Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution.
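As a rough illustration, the following sketch evaluates the Rayleigh-criterion angular resolution θ ≈ 1.22 λ/D; the wavelength and aperture diameter are illustrative assumptions rather than values taken from the text.

```python
# Minimal sketch of the diffraction limit: theta ~ 1.22 * lambda / D.
import math

wavelength = 550e-9     # metres (green light, assumed)
diameter = 2.4          # metres (a hypothetical telescope mirror)

theta_rad = 1.22 * wavelength / diameter            # radians
theta_arcsec = math.degrees(theta_rad) * 3600.0     # convert to arcseconds
print(f"diffraction-limited resolution: {theta_arcsec:.3f} arcsec")
# About 0.06 arcsec; doubling the aperture halves this angle (finer resolution).
```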
Speckle patterns
The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
Babinet's principle
Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit.
"Knife edge"
The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building.
The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle.
Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). In 1974, Prabhakar Pathak and Robert Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD).
Patterns
Several qualitative observations can be made of diffraction in general:
The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing, between the center of one slit and the next.
Matter wave diffraction
According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a non-relativistic particle is the de Broglie wavelength
λ = h/p,
where h is the Planck constant and p is the momentum of the particle (mass × velocity for slow-moving particles). For example, a sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometres.
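The quoted figure can be checked with a short calculation; the sketch below assumes the CODATA value of the Planck constant and a sodium atomic mass of about 23 u.

```python
# Minimal check of the de Broglie wavelength lambda = h / p for a slow
# sodium atom (non-relativistic, so p = m * v).
h = 6.62607015e-34          # Planck constant, J s
m_na = 22.99 * 1.66054e-27  # mass of a sodium atom, kg
v = 300.0                   # speed, m/s

wavelength = h / (m_na * v)
print(f"de Broglie wavelength: {wavelength * 1e12:.0f} pm")
# Roughly 58 pm, i.e. a few tens of picometres, of the same order as the
# figure quoted above.
```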
Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic structure of solids, molecules and proteins.
Bragg diffraction
Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction.
It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes.
The condition of constructive interference is given by Bragg's law:
nλ = 2d sin θ,
where λ is the wavelength, d is the distance between crystal planes, θ is the angle of the diffracted wave, and n is an integer known as the order of the diffracted beam.
Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information about the separation d of crystallographic planes, allowing one to deduce the crystal structure.
For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers.
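For illustration, the following sketch solves Bragg's law for the diffraction angle; the wavelength (roughly that of Cu Kα X-rays) and the plane spacing are illustrative assumptions.

```python
# Minimal sketch of Bragg's law, n * lambda = 2 * d * sin(theta),
# solved for the diffraction angle theta.
import math

wavelength = 1.54e-10   # metres (roughly Cu K-alpha, assumed)
d_spacing = 2.0e-10     # metres, distance between crystal planes (assumed)
order = 1               # n, order of the diffracted beam

sin_theta = order * wavelength / (2.0 * d_spacing)
if sin_theta <= 1.0:
    theta = math.degrees(math.asin(sin_theta))
    print(f"order {order} reflection at theta = {theta:.1f} degrees")
else:
    print("no diffracted beam: n*lambda exceeds 2d")
```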
Coherence
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent.
The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns.
In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
Applications
Diffraction before destruction
A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. These femtosecond-duration pulses make it possible, in principle, to image single biological macromolecules: because the pulses are so short, radiation damage can be outrun, and diffraction patterns of single biological macromolecules can be obtained.
See also
Angle-sensitive pixel
Atmospheric diffraction
Brocken spectre
Cloud iridescence
Coherent diffraction imaging
Diffraction from slits
Diffraction spike
Diffraction vs. interference
Diffractive solar sail
Diffractometer
Dynamical theory of diffraction
Electron diffraction
Fraunhofer diffraction
Fresnel imager
Fresnel number
Fresnel zone
Point spread function
Powder diffraction
Quasioptics
Refraction
Reflection
Schaefer–Bergmann diffraction
Thinned-array curse
X-ray diffraction
References
External links
The Feynman Lectures on Physics Vol. I Ch. 30: Diffraction
Using a cd as a diffraction grating at YouTube
Physical phenomena | Diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,438 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Crystallography",
"Diffraction",
"Spectroscopy"
] |
8,608 | https://en.wikipedia.org/wiki/Dolmen | A dolmen () or portal tomb is a type of single-chamber megalithic tomb, usually consisting of two or more upright megaliths supporting a large flat horizontal capstone or "table". Most date from the Late Neolithic period (4000–3000 BCE) and were sometimes covered with earth or smaller stones to form a tumulus (burial mound). Small pad-stones may be wedged between the cap and supporting stones to achieve a level appearance. In many instances, the covering has eroded away, leaving only the stone "skeleton".
In Sumba (Indonesia), dolmens are still commonly built (about 100 dolmens each year) for collective graves according to lineage. The traditional village of Wainyapu has some 1,400 dolmens.
Etymology
Celtic
The word dolmen entered archaeology when Théophile Corret de la Tour d'Auvergne used it to describe megalithic tombs in his (1796) using the spelling dolmin (the current spelling was introduced about a decade later and had become standard in French by about 1885). The Oxford English Dictionary (OED) does not mention dolmin in English and gives its first citation for dolmen from a book on Brittany in 1859, describing the word as "The French term, used by some English authors, for a cromlech ...". The name was supposedly derived from a Breton language term meaning 'stone table' but doubt has been cast on this, and the OED describes its origin as "Modern French". A book on Cornish antiquities from 1754 said that the current term in the Cornish language for a cromlech was ('hole of stone') and the OED says that "There is reason to think that this was the term inexactly reproduced by Latour d'Auvergne [sic] as dolmen, and misapplied by him and succeeding French archaeologists to the cromlech". Nonetheless it has now replaced cromlech as the usual English term in archaeology, when the more technical and descriptive alternatives are not used. The later Cornish term was quoit – an English-language word for an object with a hole through the middle preserving the original Cornish language term of – the name of another dolmen-like monument is in fact Mên-an-Tol 'stone with hole' (Standard Written Form: Men An Toll.)
In Irish Gaelic, dolmens are called .
Germanic
Dolmens are known by a variety of names in other languages, including Galician and , , , Afrikaans and , , Abkhaz: , Adyghe:
Danish and , , , and . Granja is used in Portugal, Galicia, and some parts of Spain. The rarer forms anta and ganda also appear. In Catalan-speaking areas, they are known simply as , but also by a variety of folk names, including ('cave'), ('crate' or 'coffin'), ('table'), ('chest'), ('hut'), ('hut'), ('slab'), ('pallet slab'), ('rock') or ('stone'), usually combined with a second part such as ('of the Arab'), ('of the Moor/s'), ('of the thief'), ('of the devil'), ('of Roland'). In the Basque Country, they are attributed to the jentilak, a race of giants.
The etymology of the and – with / meaning 'giant' – all evoke the image of giants buried (// = 'bed/grave') there. Of other Celtic languages, Welsh was borrowed into English and quoit is commonly used in English in Cornwall.
Western Europe
The oldest dolmens found in Western Europe are roughly 7,000 years old. Although archaeological evidence is unclear regarding their creators, the structures are often associated with tombs or burial chambers. Human remains, sometimes accompanied by artefacts, have been found in proximity to dolmen sites. While the remains can be analyzed with radiocarbon dating, it is difficult to confirm whether said remains coincide with the date the stones were originally set in place.
Early in the 20th century, before the advent of scientific dating, Harold Peake proposed that the dolmens of western Europe were evidence of cultural diffusion from the eastern Mediterranean. This "prospector theory" surmised that Aegean-origin prospectors had moved westward in search of metal ores, starting before 2200 BCE, and had carried with them the concept of megalithic architecture.
Middle East
Dolmens can be found in the Levant, some along the Jordan Rift Valley (Upper Galilee in Israel, the Golan Heights, Jordan), as well as in Lebanon, Syria, and southeast Turkey.
Dolmens in the Levant belong to a different, unrelated tradition to that of Europe, although they are often treated "as part of a trans-regional phenomenon that spanned the Taurus Mountains to the Arabian Peninsula." In the Levant, they are of Early Bronze rather than Late Neolithic age. They are mostly found along the Jordan Rift Valley's eastern escarpment, and in the hills of the Galilee, in clusters near Early Bronze I proto-urban settlements (3700–3000 BCE), additionally restricted by geology to areas allowing the quarrying of slabs of megalithic size. In the Levant, geological constraints led to a local burial tradition with a variety of tomb forms, dolmens being one of them.
Korea
Dolmens were built in Korea from the Bronze Age to the early Iron Age, with about 40,000 to be found throughout the peninsula. In 2000, the dolmen groups of Jukrim-ri and Dosan-ri in Gochang, Hyosan-ri and Daesin-ri in Hwasun, and Bujeong-ri, Samgeori and Osang-ri in Ganghwa gained World Cultural Heritage status. (See Gochang, Hwasun and Ganghwa Dolmen Sites.)
They are mainly distributed along the West Sea coastal area and on large rivers from the Liaoning region of China (the Liaodong Peninsula) to Jeollanam-do. In North Korea, they are concentrated around the Taedong and Jaeryeong Rivers. In South Korea, they are found in dense concentrations in river basins, such as the Han and Nakdong Rivers, and in the west coast area (Boryeong in South Chungcheong Province, Buan in North Jeolla Province, and Jeollanam-do. They are mainly found on sedimentary plains, where they are grouped in rows parallel to the direction of the river or stream. Those found in hilly areas are grouped in the direction of the hill.
India
Marayoor, Kerala
Also called Muniyaras, these dolmens belong to the Iron Age. These dolmenoids were burial chambers made of four stones placed on edge and covered by a fifth stone called the cap stone. Some of these Dolmenoids contain several burial chambers, while others have a quadrangle scooped out in laterite and lined on the sides with granite slabs. These are also covered with cap stones. Dozens of dolmens around the area of old Siva temple (Thenkasinathan Temple) at Kovilkadavu on the banks of the River Pambar and also around the area called Pius nagar, and rock paintings on the south-western slope of the plateau overlooking the river have attracted visitors.
Apart from the dolmens of the Stone Age, several dolmens of the Iron Age exist in this region, especially on the left side of the river Pambar, as is evident from the usage of neatly dressed granite slabs for the dolmens. At least one of them has a perfectly circular hole of 28 cm diameter inside the underground chamber. This region has several types of dolmens. A large number of them are overground, about 70–90 cm in height. Another type has a height of 140–170 cm. There is also an overground dolmen of double length, up to 350 cm. Fragments of burial urns are also found in the region near the dolmens. This indicates that the dolmens of 70–90 cm height were used for burial of the remains of people of high social status. Burial urns were used for the burial of the remains of commoners. The dolmens with raised roofs might have been used for habitation. Why some people lived in the cemeteries has not been satisfactorily explained.
Types
See also
Irish megalithic tombs
List of dolmens
List of megalithic sites
Megalithic art
Neolithic Europe
Nordic megalith architecture
References
Works cited
Further reading
External links
World heritage site of dolmen in Korea
The Megalith Map
The Megalithic Portal and Megalith Map
on UNESCO's World Heritage List.
Jersey Heritage Trust
Dolmens of Russia
Dolmens. Part 2. How and for which purpose were they built? Hypotheses
Burial monuments and structures
Megalithic monuments
Types of monuments and memorials
Stone monuments and memorials
Stones
Death customs
Megalithic monuments in the Middle East
Stone Age Europe
4th-millennium BC architecture | Dolmen | [
"Physics"
] | 1,884 | [
"Stones",
"Physical objects",
"Matter"
] |
8,612 | https://en.wikipedia.org/wiki/Declination | In astronomy, declination (abbreviated dec; symbol δ) is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. The declination angle is measured north (positive) or south (negative) of the celestial equator, along the hour circle passing through the point in question.
The root of the word declination (Latin, declinatio) means "a bending away" or "a bending down". It comes from the same root as the words incline ("bend forward") and recline ("bend backward").
In some 18th and 19th century astronomical texts, declination is given as North Pole Distance (N.P.D.), which is equivalent to 90° − declination. For instance an object marked as declination −5° would have an N.P.D. of 95°, and a declination of −90° (the south celestial pole) would have an N.P.D. of 180°.
Explanation
Declination in astronomy is comparable to geographic latitude, projected onto the celestial sphere, and right ascension is likewise comparable to longitude.
Points north of the celestial equator have positive declinations, while those south have negative declinations. Any units of angular measure can be used for declination, but it is customarily measured in the degrees (°), minutes (′), and seconds (″) of sexagesimal measure, with 90° equivalent to a quarter circle. Declinations with magnitudes greater than 90° do not occur, because the poles are the northernmost and southernmost points of the celestial sphere.
An object at the
celestial equator has a declination of 0°
north celestial pole has a declination of +90°
south celestial pole has a declination of −90°
The sign is customarily included whether positive or negative.
Effects of precession
The Earth's axis rotates slowly westward about the poles of the ecliptic, completing one circuit in about 26,000 years. This effect, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including declination) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch.
The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian Epochs B1875.0, B1900.0, and B1950.0.
Stars
A star's direction remains nearly fixed due to its vast distance, but its right ascension and declination do change gradually due to precession of the equinoxes and proper motion, and cyclically due to annual parallax. The declinations of Solar System objects change very rapidly compared to those of stars, due to orbital motion and close proximity.
As seen from locations in the Earth's Northern Hemisphere, celestial objects with declinations greater than 90° − φ (where φ = observer's latitude) appear to circle daily around the celestial pole without dipping below the horizon, and are therefore called circumpolar stars. This similarly occurs in the Southern Hemisphere for objects with declinations less (i.e. more negative) than −90° − φ (where φ is always a negative number for southern latitudes). An extreme example is the pole star which has a declination near to +90°, so is circumpolar as seen from anywhere in the Northern Hemisphere except very close to the equator.
Circumpolar stars never dip below the horizon. Conversely, there are other stars that never rise above the horizon, as seen from any given point on the Earth's surface (except extremely close to the equator. Upon flat terrain, the distance has to be within approximately 2 km, although this varies based upon the observer's altitude and surrounding terrain). Generally, if a star whose declination is δ is circumpolar for some observer (where δ is either positive or negative), then a star whose declination is −δ never rises above the horizon, as seen by the same observer. (This neglects the effect of atmospheric refraction.) Likewise, if a star is circumpolar for an observer at latitude φ, then it never rises above the horizon as seen by an observer at latitude −φ.
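These rules are easy to express programmatically. The sketch below (neglecting atmospheric refraction, with illustrative star declinations and an assumed observer latitude) classifies a star as circumpolar, never rising, or rising and setting.

```python
# Minimal sketch of the circumpolarity test; latitudes and declinations
# are in degrees, refraction is neglected, and the example values are
# illustrative assumptions.
def visibility(declination, latitude):
    limit = 90.0 - abs(latitude)
    sign = 1.0 if latitude >= 0 else -1.0
    if sign * declination > limit:
        return "circumpolar (never sets)"
    if sign * declination < -limit:
        return "never rises"
    return "rises and sets"

for name, dec in [("Polaris", 89.3), ("Sirius", -16.7), ("Alpha Centauri", -60.8)]:
    print(name, "->", visibility(dec, latitude=52.0))   # observer at 52 deg N
```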
Neglecting atmospheric refraction, for an observer at the equator, declination is always 0° at east and west points of the horizon. At the north point, it is 90° − |φ|, and at the south point, −90° + |φ|. From the poles, declination is uniform around the entire horizon, approximately 0°.
Non-circumpolar stars are visible only during certain days or seasons of the year.
Sun
The Sun's declination varies with the seasons. As seen from arctic or antarctic latitudes, the Sun is circumpolar near the local summer solstice, leading to the phenomenon of it being above the horizon at midnight, which is called midnight sun. Likewise, near the local winter solstice, the Sun remains below the horizon all day, which is called polar night.
Relation to latitude
When an object is directly overhead its declination is almost always within 0.01 degrees of the observer's latitude; it would be exactly equal except for two complications.
The first complication applies to all celestial objects: the object's declination equals the observer's astronomical latitude, but the term "latitude" ordinarily means geodetic latitude, which is the latitude on maps and GPS devices. In the continental United States and surrounding area, the difference (the vertical deflection) is typically a few arcseconds (1 arcsecond = 1/3600 of a degree) but can be as great as 41 arcseconds.
The second complication is that, assuming no deflection of the vertical, "overhead" means perpendicular to the ellipsoid at observer's location, but the perpendicular line does not pass through the center of the Earth; almanacs provide declinations measured at the center of the Earth. (An ellipsoid is an approximation to sea level that is mathematically manageable).
See also
Celestial coordinate system
Ecliptic
Equatorial coordinate system
Geographic coordinate system
Lunar standstill
Position of the Sun
Right ascension
Setting circles
Notes and references
External links
MEASURING THE SKY A Quick Guide to the Celestial Sphere James B. Kaler, University of Illinois
Celestial Equatorial Coordinate System University of Nebraska-Lincoln
Celestial Equatorial Coordinate Explorers University of Nebraska-Lincoln
Sidereal pointer (Torquetum) – to determine RA/DEC.
Astronomical coordinate systems
Angle
Technical factors of astrology | Declination | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,441 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical quantities",
"Astronomical coordinate systems",
"Coordinate systems",
"Wikipedia categories named after physical quantities",
"Angle"
] |
8,640 | https://en.wikipedia.org/wiki/Database%20normalization | Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.
Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).
Objectives
A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic. An example of such a language is SQL, though it is one that Codd regarded as seriously flawed.
The objectives of normalization beyond 1NF (first normal form) were stated by Codd as:
When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized:
Insertion anomaly There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. Therefore, the details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.
Update anomaly The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an "Employees' Skills" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful – the employee's address is updated on some records but not others – then the relation is left in an inconsistent state. Specifically, the relation provides conflicting answers to the question of what this particular employee's address is.
Deletion anomaly Under certain circumstances, the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The "Faculty and Their Courses" relation described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
Minimize redesign when extending the database structure
A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected.
Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.
Normal forms
Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970. Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971, and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974.
Ronald Fagin introduced the fourth normal form (4NF) in 1977 and the fifth normal form (5NF) in 1979. Christopher J. Date introduced the sixth normal form (6NF) in 2003.
Informally, a relational database relation is often described as "normalized" if it meets third normal form. Most 3NF relations are free of insertion, update, and deletion anomalies.
The normal forms (from least normalized to most normalized) are:
Example of a step-by-step normalization
Normalization is a database design technique which is used to bring a relational database table up to a higher normal form. The process is progressive, and a higher level of database normalization cannot be achieved unless the previous levels have been satisfied.
That means that, having data in unnormalized form (the least normalized) and aiming to achieve the highest level of normalization, the first step would be to ensure compliance to first normal form, the second step would be to ensure second normal form is satisfied, and so forth in order mentioned above, until the data conform to sixth normal form.
However, normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.
The data in the following example were intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized.
Initial data
Let a database table exist with the following structure:
For this example it is assumed that each book has only one author.
A table that conforms to the relational model has a primary key which uniquely identifies a row. In our example, the primary key is a composite key of {Title, Format} (indicated by the underlining):
Satisfying 1NF
In the first normal form each field contains a single value. A field may not contain a set of values or a nested record. Subject contains a set of subject values, meaning it does not comply. To solve the problem, the subjects are extracted into a separate Subject table:
Instead of one table in unnormalized form, there are now two tables conforming to the 1NF.
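As an illustration, the following sketch uses Python's built-in sqlite3 module to create the two 1NF tables; the column set is simplified and the sample rows are hypothetical.

```python
# Minimal sketch of the 1NF decomposition: the multi-valued Subject field is
# moved into its own table so that every field holds a single value.
# Sample data is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Book table: one row per (Title, Format), no set-valued columns.
cur.execute("CREATE TABLE Book (Title TEXT, Format TEXT, Price REAL, "
            "PRIMARY KEY (Title, Format))")
# Subject table: one row per (Title, Subject) pair extracted from the old set.
cur.execute("CREATE TABLE Subject (Title TEXT, Subject TEXT, "
            "PRIMARY KEY (Title, Subject))")

cur.execute("INSERT INTO Book VALUES ('Beginning MySQL', 'Hardcover', 49.99)")
cur.executemany("INSERT INTO Subject VALUES (?, ?)",
                [("Beginning MySQL", "MySQL"), ("Beginning MySQL", "Database")])
con.commit()

# Re-assembling the subjects per title shows that no information was lost.
for row in cur.execute(
        "SELECT Title, GROUP_CONCAT(Subject) FROM Subject GROUP BY Title"):
    print(row)
```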
Satisfying 2NF
Recall that the Book table below has a composite key of {Title, Format}, which will not satisfy 2NF if some subset of that key is a determinant. At this point in our design the key is not finalized as the primary key, so it is called a candidate key. Consider the following table:
All of the attributes that are not part of the candidate key depend on Title, but only Price also depends on Format. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it.
To normalize this table, make {Title} a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and remove Price into a separate table so that its dependency on Format can be preserved:
Now, both the Book and Price tables conform to 2NF.
Satisfying 3NF
The Book table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). Similar violations exist for publisher ({Publisher Country} is dependent on {Publisher}, which is dependent on {Title}) and for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the Book table is not in 3NF. To resolve this, we can place {Author Nationality}, {Publisher Country}, and {Genre Name} in their own respective tables, thereby eliminating the transitive functional dependencies:
Satisfying EKNF
The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended "to capture the salient qualities of both 3NF and BCNF" while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely mentioned in literature, it is not included in this example.
Satisfying 4NF
Assume the database is owned by a book retailer franchise that has several franchisees that own shops in different locations. And therefore the retailer decided to add a table that contains data about availability of the books at different locations:
As this table structure consists of a compound primary key, it doesn't contain any non-key attributes and it's already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the Title is not unambiguously bound to a certain Location and therefore the table doesn't satisfy 4NF.
That means that, to satisfy the fourth normal form, this table needs to be decomposed as well:
Now, every record is unambiguously identified by a superkey, therefore 4NF is satisfied.
Satisfying ETNF
Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint:
If a certain supplier supplies a certain title
and the title is supplied to the franchisee
and the franchisee is being supplied by the supplier,
then the supplier supplies the title to the franchisee.
This table is in 4NF, but the relation is equal to the join of its projections: {{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}. No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy the ETNF and can be further decomposed:
The decomposition produces ETNF compliance.
Satisfying 5NF
To spot a table that does not satisfy 5NF, it is usually necessary to examine the data thoroughly. Consider the table from the 4NF example with a small modification to its data, and let's examine whether it satisfies 5NF:
Decomposing this table lowers redundancies, resulting in the following two tables:
The query joining these tables would return the following data:
The JOIN returns three more rows than it should; adding another table to clarify the relation results in three separate tables:
What will the JOIN return now? It actually is not possible to join these three tables. That means it wasn't possible to decompose the Franchisee - Book - Location without data loss, therefore the table already satisfies 5NF.
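The "spurious rows" check can be illustrated with a small script; the relation below is hypothetical and simplified, but it shows how re-joining two binary projections of a three-column table can produce rows that were not in the original, which is the signal that such a decomposition is lossy.

```python
# Minimal sketch, with hypothetical data: decompose a three-column relation
# into two binary projections, re-join them, and compare with the original.
# Extra rows in the join mean the decomposition is lossy.
original = {
    ("Franchisee A", "Book 1", "Location X"),
    ("Franchisee A", "Book 2", "Location X"),
    ("Franchisee B", "Book 1", "Location Y"),
}

franchisee_book = {(f, b) for f, b, _ in original}
book_location = {(b, l) for _, b, l in original}

# Natural join of the two projections on the shared Book column.
rejoined = {(f, b, l) for (f, b) in franchisee_book
            for (b2, l) in book_location if b == b2}

print("original rows :", len(original))
print("rejoined rows :", len(rejoined))
print("spurious rows :", sorted(rejoined - original))
```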
C.J. Date has argued that only a database in 5NF is truly "normalized".
Satisfying DKNF
Let's have a look at the Book table from previous examples and see if it satisfies the domain-key normal form:
Logically, Thickness is determined by number of pages. That means it depends on Pages which is not a key. Let's set an example convention saying a book up to 350 pages is considered "slim" and a book over 350 pages is considered "thick".
This convention is technically a constraint but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to keep the data integrity.
In other words – nothing prevents us from putting, for example, "Thick" for a book with only 50 pages – and this makes the table violate DKNF.
To solve this, a table holding enumeration that defines the Thickness is created, and that column is removed from the original table:
That way, the domain integrity violation has been eliminated, and the table is in DKNF.
Satisfying 6NF
A simple and intuitive definition of the sixth normal form is that "a table is in 6NF when the row contains the Primary Key, and at most one other attribute".
That means, for example, the Publisher table designed while creating the 1NF:
needs to be further decomposed into two tables:
The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.
However, in data warehouses, which do not permit interactive updates and which are specialized for fast query on large data volumes, certain DBMSs use an internal 6NF representation – known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X).
In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.
See also
Denormalization
Database refactoring
Lossless join decomposition
Notes and references
Further reading
Date, C. J. (1999), An Introduction to Database Systems (8th ed.). Addison-Wesley Longman. .
Kent, W. (1983) A Simple Guide to Five Normal Forms in Relational Database Theory, Communications of the ACM, vol. 26, pp. 120–125
H.-J. Schek, P. Pistor Data Structures for an Integrated Data Base Management and Information Retrieval System
External links
Database Normalization Basics by Mike Chapple (About.com)
Database Normalization Intro , Part 2
An Introduction to Database Normalization by Mike Hillyer.
A tutorial on the first 3 normal forms by Fred Coulson
Description of the database normalization basics by Microsoft
Normalization in DBMS by Chaitanya (beginnersbook.com)
A Step-by-Step Guide to Database Normalization
ETNF – Essential tuple normal form
Database constraints
Data management
Data modeling
Relational algebra
Database management systems | Database normalization | [
"Mathematics",
"Technology",
"Engineering"
] | 2,960 | [
"Data management",
"Data modeling",
"Fields of abstract algebra",
"Data engineering",
"Mathematical relations",
"Data",
"Relational algebra"
] |
8,643 | https://en.wikipedia.org/wiki/Molecular%20diffusion | Molecular diffusion, often simply called diffusion, is the thermal motion of all (liquid or gas) particles at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size (mass) of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems, S1 and S2, at the same temperature and capable of exchanging particles. If there is a difference in the chemical potential of the systems, for example μ1 > μ2 (where μ is the chemical potential), an energy flow will occur from S1 to S2, because nature always prefers low energy and maximum entropy.
Molecular diffusion is typically described mathematically using Fick's laws of diffusion.
Applications
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
Sintering to produce solid materials (powder metallurgy, production of ceramics)
Chemical reactor design
Catalyst design in chemical industry
Steel can be diffused (e.g., with carbon or nitrogen) to modify its properties
Doping during production of semiconductors.
Significance
Diffusion is part of the transport phenomena. Of mass transport mechanisms, molecular diffusion is known as a slower one.
Biology
In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis.
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
Tracer, self- and chemical diffusion
Fundamentally, two types of diffusion are distinguished:
Tracer diffusion and Self-diffusion, which is a spontaneous mixing of molecules taking place in the absence of a concentration (or chemical potential) gradient. This type of diffusion can be followed using isotopic tracers, hence the name. The tracer diffusion is usually assumed to be identical to self-diffusion (assuming no significant isotopic effect). This diffusion can take place under equilibrium. An excellent method for the measurement of self-diffusion coefficients is pulsed field gradient (PFG) NMR, where no isotopic tracers are needed. In a so-called NMR spin echo experiment this technique uses the nuclear spin precession phase, allowing one to distinguish chemically and physically completely identical species, e.g. in the liquid phase, as for example water molecules within liquid water. The self-diffusion coefficient of water has been experimentally determined with high accuracy and thus often serves as a reference value for measurements on other liquids. The self-diffusion coefficient of neat water is: 2.299·10−9 m2·s−1 at 25 °C and 1.261·10−9 m2·s−1 at 4 °C.
Chemical diffusion occurs in a presence of concentration (or chemical potential) gradient and it results in net transport of mass. This is the process described by the diffusion equation. This diffusion is always a non-equilibrium process, increases the system entropy, and brings the system closer to equilibrium.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
Non-equilibrium system
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time, where classical results may locally apply. As the name suggests, this process is a not a true equilibrium since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
Concentration dependent "collective" diffusion
Collective diffusion is the diffusion of a large number of particles, most often within a solvent.
Contrary to Brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In the case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient D, the speed of diffusion in the particle diffusion equation, is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects:
the diffusion coefficient D in the particle diffusion equation becomes dependent of concentration. For an attractive interaction between particles, the diffusion coefficient tends to decrease as concentration increases. For a repulsive interaction between particles, the diffusion coefficient tends to increase as concentration increases.
In the case of an attractive interaction between particles, particles exhibit a tendency to coalesce and form clusters if their concentration lies above a certain threshold. This is equivalent to a precipitation chemical reaction (and if the considered diffusing particles are chemical molecules in solution, then it is a precipitation).
Molecular diffusion of gases
Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depends on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regimens formerly occupied by pure A.
Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as -dCA/dx, where CA is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is -dCB/dx. The rate of diffusion of A, NA, depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's law
NA = -D dCA/dx
(only applicable for no bulk motion)
where D is the diffusivity of A through B, proportional to the average molecular velocity and therefore dependent on the temperature and pressure of the gases. The rate of diffusion NA is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady-state conditions, in which neither dCA/dx nor dCB/dx changes with time, equimolecular counterdiffusion is considered first.
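As a simple illustration of Fick's law in steady state, the sketch below evaluates the molar flux NA = -D dCA/dx across a thin layer; the diffusivity, concentrations, and layer thickness are illustrative assumptions.

```python
# Minimal sketch of Fick's first law for steady-state molecular diffusion,
# N_A = -D * dC_A/dx, with hypothetical numbers: gas A diffusing through a
# 10 mm layer of stagnant gas B.
D = 2.0e-5          # diffusivity of A in B, m^2/s (assumed value)
c_a1 = 1.0          # concentration of A at x1, mol/m^3
c_a2 = 0.0          # concentration of A at x2, mol/m^3
dx = 0.010          # diffusion path length, m

flux = -D * (c_a2 - c_a1) / dx   # mol m^-2 s^-1, positive: A moves from x1 to x2
print(f"molar flux of A: {flux:.2e} mol/(m^2 s)")
```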
Equimolecular counterdiffusion
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is NA = -NB.
The partial pressure of A changes by dPA over the distance dx. Similarly, the partial pressure of B changes by dPB. As there is no difference in total pressure across the element (no bulk flow), we have
dPA/dx = -dPB/dx.
For an ideal gas the partial pressure is related to the molar concentration by the relation
PA V = nA R T,
where nA is the number of moles of gas A in a volume V. As the molar concentration CA is equal to nA/V, it follows that PA = CA R T.
Consequently, for gas A,
NA = -(DAB/(R T)) dPA/dx,
where DAB is the diffusivity of A in B. Similarly, for gas B,
NB = -(DBA/(R T)) dPB/dx.
Considering that dPA/dx = -dPB/dx, it therefore follows that DAB = DBA = D. If the partial pressure of A at x1 is PA1 and at x2 is PA2, integration of the above equation gives
NA = (D/(R T (x2 - x1))) (PA1 - PA2).
A similar equation may be derived for the counterdiffusion of gas B.
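A minimal numerical sketch of the integrated result for gas A is given below; the diffusivity, temperature, partial pressures, and film thickness are illustrative assumptions.

```python
# Minimal sketch of the integrated equimolecular-counterdiffusion flux,
# N_A = D * (P_A1 - P_A2) / (R * T * (x2 - x1)), with hypothetical values.
R = 8.314                # gas constant, J/(mol K)
T = 298.0                # temperature, K (assumed)
D = 1.8e-5               # diffusivity of A in B, m^2/s (assumed)
p_a1, p_a2 = 15e3, 5e3   # partial pressures of A at x1 and x2, Pa (assumed)
x1, x2 = 0.0, 0.01       # positions, m (assumed)

n_a = D * (p_a1 - p_a2) / (R * T * (x2 - x1))
print(f"rate of diffusion of A: {n_a:.2e} mol/(m^2 s)")
# The flux of B is equal and opposite: N_B = -N_A.
```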
See also
References
External links
Some pictures that display diffusion and osmosis
An animation describing diffusion.
A tutorial on the theory behind and solution of the Diffusion Equation.
NetLogo Simulation Model for Educational Use (Java Applet)
Short movie on brownian motion (includes calculation of the diffusion coefficient)
A basic introduction to the classical theory of volume diffusion (with figures and animations)
Diffusion on the nanoscale (with figures and animations)
Transport phenomena
Diffusion
Underwater diving physics | Molecular diffusion | [
"Physics",
"Chemistry",
"Engineering"
] | 2,030 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Diffusion",
"Underwater diving physics",
"Chemical engineering"
] |
8,651 | https://en.wikipedia.org/wiki/Dark%20matter | In astronomy, dark matter is an invisible and hypothetical form of matter that does not interact with light or other electromagnetic radiation. Dark matter is implied by gravitational effects which cannot be explained by general relativity unless more matter is present than can be observed. Such effects occur in the context of formation and evolution of galaxies, gravitational lensing, the observable universe's current structure, mass position in galactic collisions, the motion of galaxies within galaxy clusters, and cosmic microwave background anisotropies.
In the standard Lambda-CDM model of cosmology, the mass–energy content of the universe is 5% ordinary matter, 26.8% dark matter, and 68.2% a form of energy known as dark energy. Thus, dark matter constitutes 85% of the total mass, while dark energy and dark matter constitute 95% of the total mass–energy content.
Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes.
Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles.
Although the astrophysics community generally accepts the existence of dark matter, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required.
History
Early history
The hypothesis of dark matter has an elaborate history.
Wm. Thomson, Lord Kelvin, discussed the potential number of stars around the Sun in the appendices of a book based on a series of lectures given in 1884 in Baltimore. He inferred their density using the observed velocity dispersion of the stars near the Sun, assuming that the Sun was 20–100 million years old. He posed what would happen if there were a thousand million stars within 1 kiloparsec of the Sun (at which distance their parallax would be 1 milli-arcsecond). Kelvin concluded
Many of our supposed thousand million stars – perhaps a great majority of them – may be dark bodies.
In 1906, Poincaré used the French term [] ("dark matter") in discussing Kelvin's work. He found that the amount of dark matter would need to be less than that of visible matter (incorrectly, as it turns out).
The second to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922.
A publication from 1930 by Swedish astronomer Knut Lundmark points to him being the first to realise that the universe must contain much more mass than can be observed. Dutch radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be incorrect.
In 1933, Swiss astrophysicist Fritz Zwicky studied galaxy clusters while working at Cal Tech and made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass he called ('dark matter'). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, thus mass must be hidden from view. Based on these conclusions, Zwicky inferred some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that most of the gravitational matter present was dark. However unlike modern theories, Zwicky considered "dark matter" to be non-luminous ordinary matter.
Further indications of mass-to-light ratio anomalies came from measurements of galaxy rotation curves. In 1939, H.W. Babcock reported the rotation curve for the Andromeda nebula (now called the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, rather than to unseen matter. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda Galaxy and a mass-to-light ratio of 50; in 1940, Oort discovered and wrote about the large non-visible halo of NGC 3115.
1970s
The hypothesis of dark matter largely took root in the 1970s. Several different observations were synthesized to argue that galaxies should be surrounded by halos of unseen matter. In two papers that appeared in 1974, this conclusion was drawn in tandem by independent groups: in Princeton, New Jersey, by Jeremiah Ostriker, Jim Peebles, and Amos Yahil, and in Tartu, Estonia, by Jaan Einasto, Enn Saar, and Ants Kaasik.
One of the observations that served as evidence for the existence of galactic halos of dark matter was the shape of galaxy rotation curves. These observations were done in optical and radio astronomy. In optical astronomy, Vera Rubin and Kent Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy.
At the same time, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H) often extends to much greater galactic distances than can be observed as collective starlight, expanding the sampled distances for rotation curves – and thus of the total mass distribution – to a new dynamical regime. Early mapping of Andromeda with the telescope at Green Bank and the dish at Jodrell Bank already showed the H rotation curve did not trace the decline expected from Keplerian orbits.
As more sensitive receivers became available, Roberts & Whitehurst (1975) were able to trace the rotational velocity of Andromeda to 30 kpc, much beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii; that paper's Figure 16 combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the H data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H spectroscopy was being developed. Rogstad & Shostak (1972) published H rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H disks. In 1978, Albert Bosma showed further evidence of flat rotation curves using data from the Westerbork Synthesis Radio Telescope.
By the late 1970s the existence of dark matter halos around galaxies was widely recognized as real, and became a major unsolved problem in astronomy.
1980–1990s
A stream of observations in the 1980–1990s supported the presence of dark matter. is notable for the investigation of 967 spirals. The evidence for dark matter also included gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background.
According to the current consensus among cosmologists, dark matter is composed primarily of some type of not-yet-characterized subatomic particle.
The search for this particle, by a variety of means, is one of the major efforts in particle physics.
Technical definition
In standard cosmological calculations, "matter" means any constituent of the universe whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³. This is in contrast to "radiation", which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴), and a cosmological constant, which does not change with respect to a (ρ ∝ a⁰). The different scaling factors for matter and radiation are a consequence of radiation redshift. For example, after doubling the diameter of the observable Universe via cosmic expansion, the scale factor, a, has doubled. The energy of the cosmic microwave background radiation has been halved (because the wavelength of each photon has doubled); the energy of ultra-relativistic particles, such as early-era standard-model neutrinos, is similarly halved. The cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
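The scaling rules above can be illustrated with a short numerical sketch. The present-day densities below are arbitrary placeholder values (all set to 1 at a = 1), not measured quantities; only the dependence on the scale factor a is the point.

```python
# Illustrative sketch: how energy densities scale with the cosmic scale factor a.
def densities(a, rho_m0=1.0, rho_r0=1.0, rho_lambda0=1.0):
    """Return (matter, radiation, cosmological-constant) densities at scale factor a,
    normalised so each equals its present-day value at a = 1 (arbitrary units)."""
    rho_matter = rho_m0 * a**-3        # matter dilutes with volume
    rho_radiation = rho_r0 * a**-4     # extra factor of a from redshifted wavelengths
    rho_lambda = rho_lambda0           # constant energy density of space itself
    return rho_matter, rho_radiation, rho_lambda

# Doubling the scale factor: matter falls by 8x, radiation by 16x, Lambda is unchanged.
print(densities(2.0))   # (0.125, 0.0625, 1.0)
```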
In principle, "dark matter" means all components of the universe which are not visible but still obey ρ ∝ a⁻³. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons". Context will usually indicate which meaning is intended.
Observational evidence
Galaxy rotation curves
The arms of spiral galaxies rotate around their galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then the galaxy could be modelled as a point mass at the center with test masses orbiting around it, similar to the Solar System. From Kepler's Third Law, it is expected that the rotation velocities will decrease with distance from the center, similar to the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat or even increases as distance from the center increases.
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there may be a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
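As a rough numerical illustration of the discrepancy (not a fit to any real galaxy), the sketch below compares the Keplerian expectation for a central point mass, v(r) = √(GM/r), with a flat rotation curve, and shows that a flat curve implies an enclosed mass M(r) = v²r/G that grows linearly with radius. The luminous mass and the 220 km/s plateau are illustrative assumptions.

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_lum = 1e41         # assumed luminous mass, kg (roughly 5e10 solar masses)
kpc = 3.086e19       # metres per kiloparsec

r = np.linspace(1, 30, 30) * kpc      # radii from 1 to 30 kpc
v_kepler = np.sqrt(G * M_lum / r)     # predicted decline if all mass were central
v_flat = np.full_like(r, 2.2e5)       # an observed-style flat curve at ~220 km/s

# Enclosed mass implied by the flat curve grows linearly with radius:
M_enclosed = v_flat**2 * r / G
print(M_enclosed[-1] / M_lum)         # several times the assumed luminous mass at 30 kpc
```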
Velocity dispersions
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
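A minimal, order-of-magnitude version of this argument uses the simplest virial relation, M ≈ σ²R/G. The dispersion and radius below are illustrative assumptions; real analyses model the full distribution of stellar orbits and include geometric correction factors.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
kpc = 3.086e19     # m

sigma = 2.0e5      # assumed line-of-sight velocity dispersion, 200 km/s
R = 10 * kpc       # assumed characteristic radius of the system

M_virial = sigma**2 * R / G                     # crude virial mass estimate
print(f"{M_virial / M_sun:.1e} solar masses")   # ~9e10 solar masses
```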
Galaxy clusters
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways:
From the scatter in radial velocities of the galaxies within clusters
From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming pressure and gravity balance determines the cluster's mass profile.
Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity).
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
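For the second of these methods, a minimal sketch of the hydrostatic-equilibrium mass estimate is given below. The gas temperature, logarithmic profile slopes, and radius are illustrative assumptions rather than measurements of any particular cluster; real analyses fit full density and temperature profiles to the X-ray data.

```python
# Hydrostatic mass: M(<r) = -(k*T*r / (G*mu*m_p)) * (dln(n)/dln(r) + dln(T)/dln(r))
k_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # m^3 kg^-1 s^-2
m_p = 1.673e-27    # proton mass, kg
mu = 0.6           # mean molecular weight of the ionised intracluster gas
Mpc = 3.086e22     # m
M_sun = 1.989e30   # kg

T = 8e7            # assumed gas temperature in kelvin (roughly 7 keV)
r = 1.0 * Mpc      # radius at which the mass is evaluated
dln_n = -2.0       # assumed logarithmic slope of the gas density profile
dln_T = 0.0        # assumed isothermal temperature profile

M = -(k_B * T * r / (G * mu * m_p)) * (dln_n + dln_T)
print(f"{M / M_sun:.1e} solar masses")   # ~5e14 solar masses, typical of a rich cluster
```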
Gravitational lensing
One of the consequences of general relativity is the gravitational lens. Gravitational lensing occurs when massive objects between a source of light and the observer act as a lens to bend light from this source. Lensing does not depend on the properties of the mass; it only requires there to be a mass. The more massive an object, the more lensing is observed. An example is a cluster of galaxies lying between a more distant source such as a quasar and an observer. In this case, the galaxy cluster will lens the quasar.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the weak regime, lensing does not distort background galaxies into arcs, causing minute distortions instead. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The measured mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements.
Cosmic microwave background
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the cosmic microwave background (CMB) by its gravitational potential (mainly on large scales) and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the CMB.
The CMB is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The locations of these peaks depend on cosmological parameters. Matching theory to data, therefore, constrains cosmological parameters.
The CMB anisotropy was first discovered by COBE in 1992, though this had too coarse resolution to detect the acoustic peaks.
After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).
Structure formation
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
Bullet Cluster
The Bullet Cluster is the result of a recent collision of two galaxy clusters. It is of particular note because the location of the center of mass as measured by gravitational lensing is different from the location of the center of mass of visible matter. This is difficult for modified gravity theories, which generally predict lensing around visible matter, to explain. Standard dark matter theory however has no issue: the hot, visible gas in each cluster would be cooled and slowed down by electromagnetic interactions, while dark matter (which does not interact electromagnetically) would not. This leads to the dark matter separating from the visible gas, producing the separate lensing peak as observed.
Type Ia supernova distance measurements
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. Data indicates the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, it is expected the total energy density of everything in the universe should sum to 1 (Ω_tot ≈ 1). The measured dark energy density is Ω_Λ ≈ 0.69; the observed ordinary (baryonic) matter energy density is Ω_b ≈ 0.05, and the energy density of radiation is negligible. This leaves a missing Ω_dm ≈ 0.26 which nonetheless behaves like matter (see technical definition section above) – dark matter.
Sky surveys and baryon acoustic oscillations
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
Redshift-space distortions
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model.
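A minimal sketch of why peculiar velocities distort the map: the inferred distance comes from the observed redshift, which mixes the Hubble term with the line-of-sight peculiar velocity. All numbers below are illustrative.

```python
H0 = 70.0    # assumed Hubble constant, km/s/Mpc

def inferred_distance(true_distance_mpc, v_peculiar_kms):
    """Distance (Mpc) inferred from the redshift alone, ignoring the peculiar velocity."""
    cz = H0 * true_distance_mpc + v_peculiar_kms   # observed recession velocity, km/s
    return cz / H0

# A galaxy 100 Mpc away, falling towards a supercluster that lies behind it
# (its peculiar velocity is therefore directed away from us):
print(inferred_distance(100.0, +300.0))   # 104.3 Mpc: it appears farther than it is
print(inferred_distance(100.0, -300.0))   # 95.7 Mpc: a galaxy behind, falling towards us
```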
Lyman-alpha forest
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Theoretical classifications
Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, and indicate how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion. This distance is called the free streaming length (FSL). The categories of dark matter are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot if their FSL is much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.
The significance of the free streaming length is that the universe began with some primordial density fluctuations from the Big Bang (in turn arising from quantum fluctuations at the microscale). Particles from overdense regions will naturally spread to underdense regions, but because the universe is expanding quickly, there is a time limit for them to do so. Faster particles (hot dark matter) can beat the time limit while slower particles cannot. The particles travel a free streaming length's worth of distance within the time limit; therefore this length sets a minimum scale for later structure formation. Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies, while the reverse is true for cold dark matter.
Deep-field observations show that galaxies formed first, followed by clusters and superclusters as galaxies clump together, and therefore that most dark matter is cold. This is also the reason why neutrinos, which move at nearly the speed of light and therefore would fall under hot dark matter, cannot make up the bulk of dark matter.
Composition
The identity of dark matter is unknown, but there are many hypotheses about what dark matter could consist of, as set out in the table below.
Baryonic matter
Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. Most of the ordinary matter familiar to astronomers, including planets, brown dwarfs, red dwarfs, visible stars, white dwarfs, neutron stars, and black holes, falls into this category. A black hole would ingest both baryonic and non-baryonic matter that comes close enough to its event horizon; afterwards, the distinction between the two is lost.
These massive objects that are hard to detect are collectively known as MACHOs. Some scientists initially hoped that baryonic MACHOs could account for and explain all the dark matter.
However, multiple lines of evidence suggest the majority of dark matter is not baryonic:
Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter makes up between 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
Astronomical searches for gravitational microlensing in the Milky Way found at most only a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background by WMAP and Planck indicate that around five-sixths of the total matter is in a form that only interacts significantly with ordinary matter or photons through gravitational effects.
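The five-sixths figure in the last point is consistent with the rough budget from the nucleosynthesis argument above; the short calculation below uses round approximate values purely as an illustration.

```python
# Rough cosmic budget: baryons ~5% of the critical density, total matter ~30%.
omega_b, omega_m = 0.05, 0.30
print(1 - omega_b / omega_m)   # ~0.83, i.e. about five-sixths of the matter is non-baryonic
```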
Non-baryonic matter
If baryonic matter cannot make up most of dark matter, then dark matter must be non-baryonic. There are two main candidates for non-baryonic dark matter: new hypothetical particles and primordial black holes.
Unlike baryonic matter, nonbaryonic particles do not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis) and so their presence is felt only via their gravitational effects (such as weak lensing). In addition, some dark matter candidates can interact with themselves (self-interacting dark matter) or with ordinary particles (e.g. WIMPs or Weakly Interacting Massive Particles), possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection). Candidates abound (see the table above), each with their own strengths and weaknesses.
Undiscovered massive particles
There exists no formal definition of a Weakly Interacting Massive Particle, but broadly, it is an elementary particle which interacts via gravity and any other force (or forces) which is as weak as or weaker than the weak nuclear force, but also non-vanishing in strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter. Obtaining the correct abundance of dark matter today via thermal production requires a self-annihilation cross section of ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹, which is roughly what is expected for a new particle in the 100 GeV mass range that interacts via the electroweak force.
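The quoted cross section can be connected to the observed abundance with a common back-of-the-envelope rule of thumb, Ω_dm h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩. The sketch below applies it at the weak-scale value; this is an order-of-magnitude shortcut, not a substitute for solving the full Boltzmann equation for the relic abundance.

```python
# Thermal-relic rule of thumb (order of magnitude only):
#   Omega_dm * h^2  ≈  3e-27 cm^3 s^-1 / <sigma v>
sigma_v = 3e-26                 # cm^3/s, weak-scale self-annihilation cross section
omega_dm_h2 = 3e-27 / sigma_v
print(omega_dm_h2)              # ~0.1, close to the measured dark matter density
```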
Because supersymmetric extensions of the Standard Model of particle physics readily predict a new particle with these properties, this apparent coincidence is known as the "WIMP miracle", and a stable supersymmetric partner has long been a prime explanation for dark matter. Experimental efforts to detect WIMPs include the search for products of WIMP annihilation, including gamma rays, neutrinos and cosmic rays in nearby galaxies and galaxy clusters; direct detection experiments designed to measure the collision of WIMPs with nuclei in the laboratory, as well as attempts to directly produce WIMPs in colliders, such as the Large Hadron Collider at CERN.
In the early 2010s, results from direct-detection experiments along with the lack of evidence for supersymmetry at the Large Hadron Collider (LHC) experiment have cast doubt on the simplest WIMP hypothesis.
Undiscovered ultralight particles
Axions are hypothetical elementary particles originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). QCD effects produce an effective periodic potential in which the axion field moves. Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: A perfect dark matter candidate.
The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c² (about 10⁻¹¹ times the electron mass) axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c².
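The parenthetical mass comparison above can be checked directly against the standard electron mass of 511 keV/c²; the two-line calculation below does only that arithmetic.

```python
m_axion_eV = 5e-6        # 5 micro-eV axion mass (in eV/c^2)
m_electron_eV = 511e3    # electron mass, 511 keV/c^2
print(m_axion_eV / m_electron_eV)   # ~1e-11
```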
Because axions have extremely low mass, their de Broglie wavelength is very large, in turn meaning that quantum effects could help resolve the small-scale problems of the Lambda-CDM model. A single ultralight axion with a decay constant at the grand unified theory scale provides the correct relic density without fine-tuning.
Axions as a dark matter candidate have gained in popularity in recent years because of the non-detection of WIMPs.
Primordial black holes
Primordial black holes are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression typically needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes and also not classified as baryonic dark matter.
The idea that black holes could form in the early universe was first suggested by Yakov Zeldovich and Igor Dmitriyevich Novikov in 1967, and independently by Stephen Hawking in 1971. It quickly became clear that such black holes might account for at least part of dark matter. Primordial black holes as a dark matter candidate have the major advantage that they are based on a well-understood theory (general relativity) and on objects (black holes) that are already known to exist. However, producing primordial black holes requires exotic cosmic inflation or physics beyond the standard model of particle physics, and might also require fine-tuning. Primordial black holes can also span nearly the entire possible mass range, from atom-sized to supermassive.
The idea that primordial black holes make up dark matter gained prominence in 2015 following results of gravitational wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses), which suggests that the detected black holes might be primordial. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for over 60% of dark matter. However, that study assumed that all black holes have the same or similar mass to the LIGO/Virgo mass range, which might not be the case (as suggested by subsequent James Webb Space Telescope observations).
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However, the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter objects account for dark matter continued as of 2018, including approaches to dark matter cooling, and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist.
Nonetheless, there still exists a largely unconstrained mass range smaller than that which can be limited by optical microlensing observations, where primordial black holes may account for all dark matter.
Dark matter aggregation and dense dark matter objects
If dark matter is composed of weakly interacting particles, then an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been it cannot, because of two factors:
It lacks an efficient means to lose energy
Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it is not capable of interacting strongly in other ways except through gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
It lacks a diversity of interactions needed to form structures
Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to only interact through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation).
Detection of dark matter particles
If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs have been the main search candidates, axions have drawn renewed attention, with the Axion Dark Matter Experiment (ADMX) searching for axions and many more experiments planned for the future. Another candidate is heavy hidden sector particles which only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays.
Direct detection
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil, the nucleus will emit energy in the form of scintillation light or phonons as it passes through sensitive detection apparatus. To do so effectively, it is crucial to maintain an extremely low background, which is why such experiments typically operate deep underground, where interference from cosmic rays is minimized. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include such projects as CDMS, CRESST, EDELWEISS, and EURECA, while noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO, which use alternative methods in their attempts to detect dark matter.
Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of the Milky Way) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if a dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in the Milky Way and other galaxies. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow.
The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In 2009, an as yet unexplained surplus of gamma rays from the Milky Way's galactic center was found in Fermi data. This Galactic Center GeV excess might be due to dark matter annihilation or to a population of pulsars. In April 2012, an analysis of previously available data from Fermi's Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies.
The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed.
In 2013, results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays which could be due to dark matter annihilation.
Collider searches for dark matter
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.
Alternative hypotheses
Because dark matter has not yet been identified, many other hypotheses have emerged aiming to explain the same observational phenomena without introducing a new unknown type of matter. The theory underpinning most observational evidence for dark matter, general relativity, is well-tested on Solar System scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity could in principle eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor–vector–scalar gravity (TeVeS), f(R) gravity, negative mass, dark fluid, and entropic gravity. Alternative theories abound.
A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them in the absence of dark matter is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity and a 2020 measurement of a unique MOND effect.
The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter present in the universe.
In popular culture
Dark matter regularly appears as a topic in hybrid periodicals that cover both factual scientific topics and science fiction, and dark matter itself has been referred to as "the stuff of science fiction".
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties, thus becoming inconsistent with the hypothesized properties of dark matter in physics and cosmology. For example:
Dark matter serves as a plot device in the 1995 X-Files episode "Soft Light".
A dark-matter-inspired substance known as "Dust" features prominently in Philip Pullman's His Dark Materials trilogy.
Beings made of dark matter are antagonists in Stephen Baxter's Xeelee Sequence.
More broadly, the phrase "dark matter" is used metaphorically in fiction to evoke the unseen or invisible.
Gallery
See also
Related theories
Density wave theory – A theory in which waves of compressed gas, which move more slowly than the galaxy, maintain a galaxy's structure
Experiments
, a search apparatus
, large underground dark matter detector
, a space mission
, a research program
, astrophysical simulations
, a particle accelerator research infrastructure
Dark matter candidates
Weakly interacting slim particle (WISP) – Low-mass counterpart to WIMP
Other
Luminiferous aether – A once theorized invisible and infinite material with no interaction with physical objects, used to explain how light could travel through a vacuum (now disproven)
Notes
References
Further reading
(Recommended on the BookAuthority site)
Weiss, Rainer, (July/August 2023) "The Dark Universe Comes into Focus" Scientific American, vol. 329, no. 1, pp. 7–8.
External links
Celestial mechanics
Large-scale structure of the cosmos
Physics beyond the Standard Model
Astroparticle physics
Exotic matter
Matter
Concepts in astronomy
Unsolved problems in astronomy
Articles containing video clips
Dark concepts in astrophysics | Dark matter | [
"Physics",
"Astronomy"
] | 9,000 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Astroparticle physics",
"Unsolved problems in physics",
"Classical mechanics",
"Astrophysics",
"Dark concepts in astrophysics",
"Astronomical controversies",
"Particle physics",
"Exotic matter",
"Celestial mechanics",
... |
8,663 | https://en.wikipedia.org/wiki/Daniel%20Gabriel%20Fahrenheit | Daniel Gabriel Fahrenheit FRS (24 May 1686 – 16 September 1736) was a physicist, inventor, and scientific instrument maker, born in Poland to a family of German extraction. Fahrenheit invented thermometers accurate and consistent enough to allow the comparison of temperature measurements between different observers using different instruments. Fahrenheit is also credited with inventing mercury-in-glass thermometers that were more accurate than the spirit-filled thermometers of the time. The popularity of his thermometers led to the widespread adoption of the Fahrenheit scale attached to his instruments.
Biography
Early life
Fahrenheit was born in Danzig (Gdańsk), then in the Polish–Lithuanian Commonwealth. The Fahrenheits were a German Hanse merchant family who had lived in several Hanseatic cities. Fahrenheit's great-grandfather had lived in Rostock, and research suggests that the Fahrenheit family originated in Hildesheim. Daniel's grandfather moved from Kneiphof in Königsberg (then in the Duchy of Prussia) to Danzig and settled there as a merchant in 1650. His son, Daniel Fahrenheit (the father of Daniel Gabriel), married Concordia Schumann, the daughter of a well-known Danzig business family. Daniel was the eldest of the five Fahrenheit children (two sons, three daughters) who survived childhood. His sister, Virginia Elisabeth Fahrenheit, married Benjamin Krüger and was the mother of Benjamin Ephraim Krüger, a clergyman and playwright.
As a young adult, Fahrenheit "showed a particular desire for studying," and was scheduled to enroll in the Danzig Gymnasium. But on 14 August 1701, his parents died after eating poisonous mushrooms. Fahrenheit, along with two brothers and sisters, was placed under guardianship. In 1702, Fahrenheit's guardians enrolled him in a bookkeeping course and sent him to a four-year merchant trade apprenticeship in Amsterdam.
Upon completing his apprenticeship, Fahrenheit ran off and began a period of travel through the Holy Roman Empire, Sweden, and Denmark in 1707. At the request of his guardians, a warrant was issued for his arrest with the intention of placing him into the service of the Dutch East India company.
Work with thermometers, Fahrenheit scale
By around 1706, Fahrenheit was manufacturing and shipping barometers and spirit-filled thermometers. In 1708, Fahrenheit met Ole Rømer, the astronomer and mayor of Copenhagen, and was introduced to Rømer's temperature scale and his methods for making thermometers. Rømer told Fahrenheit that demand for accurate thermometers was high. The visit inspired Fahrenheit to try to improve his own offerings. Perhaps not coincidentally, Fahrenheit's arrest warrant was dropped around the time of his meeting with Rømer.
In 1709, Fahrenheit returned to Danzig and took observations using his barometers and thermometers, traveled more in 1710 and returned to Danzig in 1711 to settle his parents' estate. After additional travel to Königsberg and Mitau in 1711, he returned to Danzig in 1712 and stayed there for two years. During this period he worked on solving technical problems with his thermometers.
Fahrenheit began experimenting with mercury thermometers in 1713. Also by this time, Fahrenheit was using a modified version of Rømer's scale for his thermometers which would later evolve into his own Fahrenheit scale. In 1714, Fahrenheit left Danzig for Berlin and Dresden to work closely with the glass-blowers there. In that year Christian Wolff wrote about Fahrenheit's thermometers in a journal after receiving a pair of his alcohol-based devices, helping to boost Fahrenheit's reputation in the scientific community.
In addition to his interest in meteorological instruments, Fahrenheit also worked on his ideas for a mercury clock, a perpetual motion machine, and a heliostat around 1715. He struck up a correspondence with Leibniz about some of these projects. From the exchange of letters, we learn that Fahrenheit was running out of money while working on his projects and asked Leibniz for help obtaining a paid post so he could continue his work.
In 1717 or 1718, Fahrenheit returned to Amsterdam and began selling barometers, areometers, and his mercury and alcohol-based thermometers commercially. By 1721, Fahrenheit had perfected the process of crafting and standardizing his thermometers. The superiority of his mercury thermometers over alcohol-based thermometers made them very popular, leading to the widespread adoption of his Fahrenheit scale, the measurement system he developed and used for his thermometers.
Later life and controversy
Fahrenheit spent the remainder of his life in Amsterdam. From 1718 onward, he lectured in chemistry in Amsterdam. He visited England in 1724 and was elected a Fellow of the Royal Society on 5 May. In that year, he published five papers in Latin for the Royal Society's scientific journal, Philosophical Transactions, on various topics. In his second paper, "Experimenta et observationes de congelatione aquæ in vacuo factæ", he provides a description of his thermometers and the reference points he used for calibrating them. For two centuries, this document was the only description of Fahrenheit's process for making thermometers. In the 20th century, Ernst Cohen uncovered correspondences between Fahrenheit and Herman Boerhaave which cast considerable doubt on the veracity of Fahrenheit's article explaining the reference points for his scale and suggested that, in fact, Fahrenheit's scale was largely derived from Rømer's scale. In his book, The History of the Thermometer and Its Use in Meteorology, W. E. Knowles Middleton writes,
From August 1736 to his death, Fahrenheit stayed in the house of Johannes Frisleven at Plein Square in The Hague in connection with an application for a patent at the States of Holland and West Friesland. At the beginning of September, he became ill and on the 7th his health had deteriorated to such an extent that he had notary Willem Ruijsbroek come to draw up his will. On the 11th, the notary came by again to make some changes. Five days after that, Fahrenheit died at the age of fifty. Four days later, he received the fourth-class funeral of one who is classified as destitute, in the Kloosterkerk in The Hague (the Cloister or Monastery Church).
Fahrenheit scale
According to Fahrenheit's 1724 article, he determined his scale by reference to three fixed points of temperature. The lowest temperature was achieved by preparing a frigorific mixture of ice, water, and a salt ("ammonium chloride or even sea salt"), and waiting for the eutectic system to reach equilibrium temperature. The thermometer then was placed into the mixture and the liquid in the thermometer allowed to descend to its lowest point. The thermometer's reading there was taken as 0 degrees. The second reference point was selected as the reading of the thermometer when it was placed in still water when ice was just forming on the surface. This was assigned as 32 degrees. The third calibration point, taken as 96 degrees, was selected as the thermometer's reading when the instrument was placed under the arm or in the mouth.
Fahrenheit came up with the idea that mercury boils around 300 degrees on this temperature scale. Work by others showed that water boils about 180 degrees above its freezing point. The Fahrenheit scale later was redefined to make the freezing-to-boiling interval exactly 180 degrees, a convenient value as 180 is a highly composite number, meaning that it is evenly divisible into many fractions. It is because of the scale's redefinition that normal mean body temperature today is taken as 98.6 degrees, whereas it was 96 degrees on Fahrenheit's original scale.
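On the redefined scale, the freezing and boiling points of water (32 and 212 degrees, a 180-degree span) fix a linear conversion from Celsius; the short sketch below simply restates that definition and reproduces the 98.6-degree body temperature mentioned above.

```python
def celsius_to_fahrenheit(c):
    """Convert using the redefined scale: 0 C -> 32 F and 100 C -> 212 F."""
    return c * 180.0 / 100.0 + 32.0

print(celsius_to_fahrenheit(0))     # 32.0  (freezing point of water)
print(celsius_to_fahrenheit(100))   # 212.0 (boiling point of water)
print(celsius_to_fahrenheit(37))    # 98.6  (normal body temperature on the redefined scale)
```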
The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in English-speaking countries until the 1970s. It has since been largely replaced by the Celsius scale, long used in the rest of the world, except in the United States, where temperatures and weather reports are still broadcast in Fahrenheit.
See also
Fahrenheit hydrometer
People from Gdańsk (Danzig)
Anders Celsius
Lord Kelvin
References
Further reading
(Latin)
(Czech)
(Russian)
External links
Letter from Daniel Gabriel Fahrenheit (scan) to Carl Linnaeus, 7 May 1736 n.s.,
Fahrenheit's papers in the Royal Society Publishing
1686 births
1736 deaths
Immigrants to the Dutch Republic
Fellows of the Royal Society
17th-century people from the Polish–Lithuanian Commonwealth
Scientists from Gdańsk
Creators of temperature scales | Daniel Gabriel Fahrenheit | [
"Physics"
] | 1,895 | [
"Scales of temperature",
"Physical quantities",
"Creators of temperature scales"
] |
8,667 | https://en.wikipedia.org/wiki/Double-slit%20experiment | In modern physics, the double-slit experiment demonstrates that light and matter can exhibit behavior of both classical particles and classical waves. This type of experiment was first performed by Thomas Young in 1801, as a demonstration of the wave behavior of visible light. In 1927, Davisson and Germer and, independently, George Paget Thomson and his research student Alexander Reid demonstrated that electrons show the same behavior, which was later extended to atoms and molecules. Thomas Young's experiment with light was part of classical physics long before the development of quantum mechanics and the concept of wave–particle duality. He believed it demonstrated that the Christiaan Huygens' wave theory of light was correct, and his experiment is sometimes referred to as Young's experiment or Young's slits.
The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves (the wave is typically made of many photons and better referred to as a wave front, not to be confused with the wave properties of the individual photon) that later combine into a single wave. Changes in the path-lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a beam splitter.
In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. These results demonstrate the principle of wave–particle duality.
Other atomic-scale entities, such as electrons, are found to exhibit the same behavior when fired towards a double slit. Additionally, the detection of individual discrete impacts is observed to be inherently probabilistic, which is inexplicable using classical mechanics.
The experiment can be done with entities much larger than electrons and photons, although it becomes more difficult as size increases. The largest entities for which the double-slit experiment has been performed were molecules that each comprised 2000 atoms (whose total mass was 25,000 atomic mass units).
The double-slit experiment (and its variations) has become a classic for its clarity in expressing the central puzzles of quantum mechanics. Richard Feynman called it "a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery [of quantum mechanics]."
Overview
If light consisted strictly of ordinary or classical particles, and these particles were fired in a straight line through a slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this "single-slit experiment" is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. The top portion of the image shows the central portion of the pattern formed when a red laser illuminates a slit and, if one looks carefully, two faint side bands. More bands can be seen with a more highly refined apparatus. Diffraction explains the pattern as being the result of the interference of light waves from the slit.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. (See the bottom photograph to the right.)
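The spacing of the bands follows from the standard small-angle double-slit relation Δy ≈ λL/d, where λ is the wavelength, d the slit separation and L the distance to the screen. The wavelength, slit separation and screen distance below are generic illustrative values.

```python
wavelength = 633e-9   # red helium-neon laser light, metres
d = 0.25e-3           # assumed slit separation, metres
L = 1.0               # assumed slit-to-screen distance, metres

delta_y = wavelength * L / d          # fringe spacing in the small-angle approximation
print(f"{delta_y * 1e3:.2f} mm between adjacent bright bands")   # ~2.5 mm
```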
When Thomas Young (1773–1829) first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries.
However, the later discovery of the photoelectric effect demonstrated that under different circumstances, light can behave as if it is composed of discrete particles. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. He also proposed (as a thought experiment) that if detectors were placed before each slit, the interference pattern would disappear.
The Englert–Greenberger duality relation provides a detailed treatment of the mathematics of double-slit interference in the context of quantum mechanics.
A low-intensity double-slit experiment was first performed by G. I. Taylor in 1909, by reducing the level of incident light until photon emission/absorption events were mostly non-overlapping.
A slit interference experiment was not performed with anything other than light until 1961, when Claus Jönsson of the University of Tübingen performed it with coherent electron beams and multiple slits. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi performed a related experiment using single electrons from a coherent source and a biprism beam splitter, showing the statistical nature of the buildup of the interference pattern, as predicted by quantum theory. In 2002, the single-electron version of the experiment was voted "the most beautiful experiment" by readers of Physics World. Since that time a number of related experiments have been published, with a little controversy.
In 2012, Stefano Frabboni and co-workers sent single electrons onto nanofabricated slits (about 100 nm wide) and, by detecting the transmitted electrons with a single-electron detector, they could show the build-up of a double-slit interference pattern. Many related experiments involving the coherent interference have been performed; they are the basis of modern electron diffraction, microscopy and high resolution imaging.
In 2018, single particle interference was demonstrated for antimatter in the Positron Laboratory (L-NESS, Politecnico di Milano) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi.
Variations of the experiment
Interference from individual particles
An important version of this experiment involves single particle detection. Illuminating the double-slit with a low intensity results in single particles being detected as white dots on the screen. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (see the image below).
This demonstrates the wave–particle duality, which states that all matter exhibits both wave and particle properties: the particle is measured as a single pulse at a single position, while the modulus squared of the wave describes the probability of detecting the particle at a specific place on the screen, giving a statistical interference pattern. This phenomenon has been shown to occur with photons, electrons, atoms, and even some molecules: with buckminsterfullerene (C60) in 2001, with two molecules of 430 atoms each in 2011, and with molecules of up to 2000 atoms in 2019.
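The statistical build-up can be mimicked with a simple Monte Carlo sketch: single hit positions are drawn from the idealised far-field two-slit intensity, a cos² fringe term modulated by a single-slit envelope. The wavelength, slit width, slit separation and screen distance are illustrative values, and the calculation is only a toy model of the detection statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength, d, a, L = 633e-9, 0.25e-3, 0.05e-3, 1.0   # illustrative values, metres

y = np.linspace(-0.01, 0.01, 4001)                # positions on the screen
beta = np.pi * a * y / (wavelength * L)           # single-slit envelope phase
envelope = np.sinc(beta / np.pi) ** 2             # np.sinc(x) = sin(pi x)/(pi x)
intensity = envelope * np.cos(np.pi * d * y / (wavelength * L)) ** 2

p = intensity / intensity.sum()                   # |psi|^2 as a probability distribution
hits = rng.choice(y, size=5000, p=p)              # 5000 single-particle detection events

counts, _ = np.histogram(hits, bins=100)
print(counts.max(), counts.min())   # strongly uneven bins: the fringe pattern has emerged
```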
In addition to interference patterns built up from single particles, up to 4 entangled photons can also show interference patterns.
Mach–Zehnder interferometer
The Mach–Zehnder interferometer can be seen as a simplified version of the double-slit experiment. Instead of propagating through free space after the two slits, and hitting any position in an extended screen, in the interferometer the photons can only propagate via two paths, and hit two discrete photodetectors. This makes it possible to describe it via simple linear algebra in dimension 2, rather than differential equations.
A photon emitted by the laser hits the first beam splitter and is then in a superposition between the two possible paths. In the second beam splitter these paths interfere, causing the photon to hit the photodetector on the right with probability one, and the photodetector on the bottom with probability zero. Blocking one of the paths, or equivalently detecting the presence of a photon on a path eliminates interference between the paths: both photodetectors will be hit with probability 1/2. This indicates that after the first beam splitter the photon does not take one path or another, but rather exists in a quantum superposition of the two paths.
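Because there are only two paths, the interferometer can indeed be described with 2×2 linear algebra. The sketch below uses a symmetric 50/50 beam-splitter matrix (one of several equivalent phase conventions) and shows both the balanced case and the effect of blocking one path.

```python
import numpy as np

# Symmetric 50/50 beam splitter acting on the two-path amplitudes (upper, lower).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])             # photon enters one input port

# Balanced interferometer: two beam splitters, no extra phase between them.
out = BS @ BS @ photon_in
print(np.abs(out) ** 2)                  # [0. 1.]: one detector fires with certainty

# Blocking one path after the first beam splitter removes the interference:
after_first = BS @ photon_in
blocked = np.array([after_first[0], 0])  # amplitude on the blocked path is absorbed
out_blocked = BS @ blocked
print(np.abs(out_blocked) ** 2)          # [0.25 0.25]: the surviving photons reach the
                                         # two detectors with equal probability
```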
"Which-way" experiments and the principle of complementarity
A well-known thought experiment predicts that if particle detectors are positioned at the slits, showing through which slit a photon goes, the interference pattern will disappear. This which-way experiment illustrates the complementarity principle that photons can behave as either particles or waves, but cannot be observed as both at the same time.
Despite the importance of this thought experiment in the history of quantum mechanics (for example, see the discussion on Einstein's version of this experiment), technically feasible realizations of this experiment were not proposed until the 1970s. (Naive implementations of the textbook thought experiment are not possible because photons cannot be detected without absorbing the photon.) Currently, multiple experiments have been performed illustrating various aspects of complementarity.
An experiment performed in 1987 produced results that demonstrated that partial information could be obtained regarding which path a particle had taken without destroying the interference altogether. This "wave-particle trade-off" takes the form of an inequality relating the visibility of the interference pattern and the distinguishability of the which-way paths.
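In its most commonly quoted form (often attributed to Greenberger and Yasin, and later refined by Englert), the inequality reads V² + D² ≤ 1, where V is the visibility of the interference fringes and D is the distinguishability of the two paths.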
Delayed choice and quantum eraser variations
Wheeler's delayed-choice experiments demonstrate that extracting "which path" information after a particle passes through the slits can seem to retroactively alter its previous behavior at the slits.
Quantum eraser experiments demonstrate that wave behavior can be restored by erasing or otherwise making permanently unavailable the "which path" information.
A simple do-it-at-home illustration of the quantum eraser phenomenon was given in an article in Scientific American. If one sets polarizers before each slit with their axes orthogonal to each other, the interference pattern will be eliminated. The polarizers can be considered as introducing which-path information to each beam. Introducing a third polarizer in front of the detector with an axis of 45° relative to the other polarizers "erases" this information, allowing the interference pattern to reappear. This can also be accounted for by considering the light to be a classical wave, and also when using circular polarizers and single photons. Implementations of the polarizers using entangled photon pairs have no classical explanation.
Weak measurement
In a highly publicized experiment in 2012, researchers claimed to have identified the path each particle had taken without any adverse effects at all on the interference pattern generated by the particles. In order to do this, they used a setup such that particles coming to the screen were not from a point-like source, but from a source with two intensity maxima. However, commentators such as Svensson have pointed out that there is in fact no conflict between the weak measurements performed in this variant of the double-slit experiment and the Heisenberg uncertainty principle. Weak measurement followed by post-selection did not allow simultaneous position and momentum measurements for each individual particle, but rather allowed measurement of the average trajectory of the particles that arrived at different positions. In other words, the experimenters were creating a statistical map of the full trajectory landscape.
Other variations
In 1967, Pfleegor and Mandel demonstrated two-source interference using two separate lasers as light sources.
It was shown experimentally in 1972 that in a double-slit system where only one slit was open at any time, interference was nonetheless observed provided the path difference was such that the detected photon could have come from either slit. The experimental conditions were such that the photon density in the system was much less than 1.
In 1991, Carnal and Mlynek performed the classic Young's double slit experiment with metastable helium atoms passing through micrometer-scale slits in gold foil.
In 1999, a quantum interference experiment (using a diffraction grating, rather than two slits) was successfully performed with buckyball molecules (each of which comprises 60 carbon atoms). A buckyball is large enough (diameter about 0.7 nm, nearly half a million times larger than a proton) to be seen in an electron microscope.
In 2002, an electron field emission source was used to demonstrate the double-slit experiment. In this experiment, a coherent electron wave was emitted from two closely located emission sites on the needle apex, which acted as double slits, splitting the wave into two coherent electron waves in a vacuum. The interference pattern between the two electron waves could then be observed. In 2017, researchers performed the double-slit experiment using light-induced field electron emitters. With this technique, emission sites can be optically selected on a scale of ten nanometers. By selectively deactivating (closing) one of the two emissions (slits), researchers were able to show that the interference pattern disappeared.
In 2005, E. R. Eliel presented an experimental and theoretical study of the optical transmission of a thin metal screen perforated by two subwavelength slits, separated by many optical wavelengths. The total intensity of the far-field double-slit pattern is shown to be reduced or enhanced as a function of the wavelength of the incident light beam.
In 2012, researchers at the University of Nebraska–Lincoln performed the double-slit experiment with electrons as described by Richard Feynman, using new instruments that allowed control of the transmission of the two slits and the monitoring of single-electron detection events. Electrons were fired by an electron gun and passed through one or two slits, each 62 nm wide × 4 μm tall.
In 2013, a quantum interference experiment (using diffraction gratings, rather than two slits) was successfully performed with molecules that each comprised 810 atoms (whose total mass was over 10,000 atomic mass units). The record was raised to 2000 atoms (25,000 amu) in 2019.
Hydrodynamic pilot wave analogs
Hydrodynamic analogs have been developed that can recreate various aspects of quantum mechanical systems, including single-particle interference through a double-slit. A silicone oil droplet, bouncing along the surface of a liquid, self-propels via resonant interactions with its own wave field. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet's interaction with its own ripples, which form what is known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles – including behaviors customarily taken as evidence that elementary particles are spread through space like waves, without any specific location, until they are measured.
Behaviors mimicked via this hydrodynamic pilot-wave system include quantum single particle diffraction, tunneling, quantized orbits, orbital level splitting, spin, and multimodal statistics. It is also possible to infer uncertainty relations and exclusion principles. Videos are available illustrating various features of this system. (See the External links.)
However, more complicated systems that involve two or more particles in superposition are not amenable to such a simple, classically intuitive explanation. Accordingly, no hydrodynamic analog of entanglement has been developed. Nevertheless, optical analogs are possible.
Double-slit experiment on time
In 2023, an experiment was reported that recreated an interference pattern in time. A pump laser pulse shone on a screen coated with indium tin oxide (ITO) altered the properties of the electrons within the material through the Kerr effect, switching it from transparent to reflective for around 200 femtoseconds. A subsequent probe laser beam hitting the ITO screen saw this temporary change in optical properties as a slit in time, and two such pulses as a double slit, with the phase differences adding destructively or constructively on each frequency component and producing an interference pattern. Similar results have been obtained classically with water waves.
Classical wave-optics formulation
Much of the behaviour of light can be modelled using classical wave theory. The Huygens–Fresnel principle is one such model; it states that each point on a wavefront generates a secondary wavelet, and that the disturbance at any subsequent point can be found by summing the contributions of the individual wavelets at that point. This summation needs to take into account the phase as well as the amplitude of the individual wavelets. Only the intensity of a light field can be measured—this is proportional to the square of the amplitude.
In the double-slit experiment, the two slits are illuminated by the quasi-monochromatic light of a single laser. If the width of the slits is small enough (much less than the wavelength of the laser light), the slits diffract the light into cylindrical waves. These two cylindrical wavefronts are superimposed, and the amplitude, and therefore the intensity, at any point in the combined wavefronts depends on both the magnitude and the phase of the two wavefronts. The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves.
If the viewing distance is large compared with the separation of the slits (the far field), the phase difference can be found using the geometry shown in the figure below right. The path difference between two waves travelling at an angle θ is given by:

d \sin\theta \approx d\theta

where d is the distance between the two slits. When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity, is maximal; when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., the two waves cancel and the summed intensity is zero. This effect is known as interference. The interference fringe maxima occur at angles θ_n satisfying

d \sin\theta_n = n\lambda, \quad n = 0, 1, 2, \ldots

where λ is the wavelength of the light. The angular spacing of the fringes, θ_f, is given by

\theta_f \approx \lambda / d

The spacing of the fringes at a distance z from the slits is given by

w = z\theta_f = z\lambda / d

For example, if two slits are separated by 0.5 mm (d), and are illuminated with a 0.6 μm wavelength laser (λ), then at a distance of 1 m (z), the spacing of the fringes will be 1.2 mm.
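A minimal numerical check of this worked example (plain Python; the variable names are ours):

```python
wavelength = 0.6e-6        # lambda = 0.6 micrometres
slit_separation = 0.5e-3   # d = 0.5 mm
screen_distance = 1.0      # z = 1 m

# Small-angle fringe spacing w = z * lambda / d
fringe_spacing = screen_distance * wavelength / slit_separation
print(fringe_spacing * 1e3, "mm")   # -> 1.2 mm, as quoted above
```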
If the width b of the slits is appreciable compared to the wavelength, the Fraunhofer diffraction equation is needed to determine the intensity of the diffracted light as follows:

I(\theta) \propto \cos^{2}\!\left(\frac{\pi d \sin\theta}{\lambda}\right) \operatorname{sinc}^{2}\!\left(\frac{\pi b \sin\theta}{\lambda}\right)

where the sinc function is defined as sinc(x) = sin(x)/x for x ≠ 0, and sinc(0) = 1.
This is illustrated in the figure above, where the first pattern is the diffraction pattern of a single slit, given by the sinc² function in this equation, and the second figure shows the combined intensity of the light diffracted from the two slits, where the cos² function represents the fine structure, and the coarser structure represents diffraction by the individual slits as described by the sinc² function.
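The combined structure can also be reproduced numerically. The sketch below evaluates the cos²·sinc² form of the far-field intensity given above; the slit width b = 0.05 mm is an assumed illustrative value, not taken from the text.

```python
import numpy as np

wavelength = 0.6e-6    # m
d = 0.5e-3             # slit separation (m)
b = 0.05e-3            # slit width (m) -- an assumed illustrative value

theta = np.linspace(-5e-3, 5e-3, 2001)   # small angles (radians)

# np.sinc(u) = sin(pi*u)/(pi*u), so this equals sinc(pi*b*sin(theta)/lambda)**2
# in the convention used in the text.
envelope = np.sinc(b * np.sin(theta) / wavelength) ** 2        # single-slit diffraction envelope
fringes = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2  # two-slit interference fringes
intensity = envelope * fringes                                 # relative intensity, peak value 1

print(intensity.max(), intensity[len(theta) // 2])  # both 1.0, at theta = 0
```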
Similar calculations for the near field can be made by applying the Fresnel diffraction equation, which implies that as the plane of observation gets closer to the plane in which the slits are located, the diffraction patterns associated with each slit decrease in size, so that the area in which interference occurs is reduced, and may vanish altogether when there is no overlap in the two diffracted patterns.
Path-integral formulation
The double-slit experiment can illustrate the path integral formulation of quantum mechanics provided by Feynman. The path integral formulation replaces the classical notion of a single, unique trajectory for a system, with a sum over all possible trajectories. The trajectories are added together by using functional integration.
Each path is considered equally likely, and thus contributes the same amount. However, the phase of this contribution at any given point along the path is determined by the action along the path:

A_\text{path}(x, t) = e^{\, i S(x, t) / \hbar}

All these contributions are then added together, and the magnitude of the final result is squared, to get the probability distribution for the position of a particle:

p(x, t) \propto \left| \sum_{\text{all paths}} e^{\, i S(x, t) / \hbar} \right|^{2}

As is always the case when calculating probability, the results must then be normalized by imposing:

\int_{\text{all } x} p(x, t) \, \mathrm{d}x = 1
The probability distribution of the outcome is the normalized square of the norm of the superposition, over all paths from the point of origin to the final point, of waves propagating proportionally to the action along each path. The differences in the cumulative action along the different paths (and thus the relative phases of the contributions) produce the interference pattern observed in the double-slit experiment. Feynman stressed that his formulation is merely a mathematical description, not an attempt to describe a real process that we can measure.
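As a toy illustration of the sum-over-paths idea (keeping only the two classical paths through the slits rather than Feynman's full functional integral), one can add the phase factors exp(iS/ħ), which for a free particle reduce to exp(ikL) along each straight path. The geometry below reuses the numbers from the classical example above and is an illustrative choice, not a rigorous calculation.

```python
import numpy as np

wavelength = 0.6e-6                   # m, as in the classical example above
k = 2 * np.pi / wavelength            # free-particle phase advance per metre: exp(i*k*L)
d = 0.5e-3                            # slit separation (m)
L = 1.0                               # slit-to-screen distance (m)

x = np.linspace(-5e-3, 5e-3, 2001)    # positions on the screen (m)
path_top = np.hypot(L, x - d / 2)     # length of the straight path through one slit
path_bottom = np.hypot(L, x + d / 2)  # length of the straight path through the other slit

# Sum the two contributions, square the magnitude, then normalize.
amplitude = np.exp(1j * k * path_top) + np.exp(1j * k * path_bottom)
probability = np.abs(amplitude) ** 2
probability /= probability.sum()

print(x[probability.argmax()])        # -> 0.0: the brightest fringe sits at the centre of the screen
```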
Interpretations of the experiment
Like the Schrödinger's cat thought experiment, the double-slit experiment is often used to highlight the differences and similarities between the various interpretations of quantum mechanics.
Standard quantum physics
The standard interpretation of the double slit experiment is that the pattern is a wave phenomenon, representing interference between two probability amplitudes, one for each slit. Low intensity experiments demonstrate that the pattern is filled in one particle detection at a time. Any change to the apparatus designed to detect a particle at a particular slit alters the probability amplitudes and the interference disappears. This interpretation is independent of any conscious observer.
Complementarity
Niels Bohr interpreted quantum experiments like the double-slit experiment using the concept of complementarity. In Bohr's view quantum systems are not classical, but measurements can only give classical results. Certain pairs of classical properties will never be observed in a quantum system simultaneously: the interference pattern of waves in the double slit experiment will disappear if particles are detected at the slits. Modern quantitative versions of the concept allow for a continuous tradeoff between the visibility of the interference fringes and the probability of particle detection at a slit.
Copenhagen interpretation
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. The term "Copenhagen interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and some form of complementarity principle. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. In the Copenhagen interpretation, complementarity means a particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time. In a Copenhagen-type view, the question of which slit a particle travels through has no meaning when there is no detector.
Relational interpretation
According to the relational interpretation of quantum mechanics, first proposed by Carlo Rovelli, observations such as those in the double-slit experiment result specifically from the interaction between the observer (measuring device) and the object being observed (physically interacted with), not any absolute property possessed by the object. In the case of an electron, if it is initially "observed" at a particular slit, then the observer–particle (photon–electron) interaction includes information about the electron's position. This partially constrains the particle's eventual location at the screen. If it is "observed" (measured with a photon) not at a particular slit but rather at the screen, then there is no "which path" information as part of the interaction, so the electron's "observed" position on the screen is determined strictly by its probability function. This makes the resulting pattern on the screen the same as if each individual electron had passed through both slits.
Many-worlds interpretation
As with Copenhagen, there are multiple variants of the many-worlds interpretation. The unifying theme is that physical reality is identified with a wavefunction, and this wavefunction always evolves unitarily, i.e., following the Schrödinger equation with no collapses. Consequently, there are many parallel universes, which only interact with each other through interference. David Deutsch argues that the way to understand the double-slit experiment is that in each universe the particle travels through a specific slit, but its motion is affected by interference with particles in other universes, and this interference creates the observable fringes. David Wallace, another advocate of the many-worlds interpretation, writes that in the familiar setup of the double-slit experiment the two paths are not sufficiently separated for a description in terms of parallel universes to make sense.
De Broglie–Bohm theory
An alternative to the standard understanding of quantum mechanics, the De Broglie–Bohm theory states that particles also have precise locations at all times, and that their velocities are defined by the wave-function. So while a single particle will travel through one particular slit in the double-slit experiment, the so-called "pilot wave" that influences it will travel through both. The two slit de Broglie–Bohm trajectories were first calculated by Chris Dewdney while working with Chris Philippidis and Basil Hiley at Birkbeck College (London). The de Broglie–Bohm theory produces the same statistical results as standard quantum mechanics, but dispenses with many of its conceptual difficulties, at the cost of added complexity in the form of an ad hoc quantum potential that guides the particles.
While the model is in many ways similar to the Schrödinger equation, it is known to fail for relativistic cases and does not account for features such as particle creation or annihilation in quantum field theory. Many authors, such as Nobel laureates Werner Heisenberg, Sir Anthony James Leggett and Sir Roger Penrose, have criticized it for not adding anything new.
More complex variants of this type of approach have appeared, for instance the three wave hypothesis of Ryszard Horodecki as well as other complicated combinations of de Broglie and Compton waves. To date there is no evidence that these are useful.
See also
Aharonov-Bohm effect
Complementarity (physics)
Delayed-choice quantum eraser
Diffraction from slits
Dual-polarization interferometry
Elitzur–Vaidman bomb tester
N-slit interferometer
Matter wave
Photon polarization
Quantum coherence
Schrödinger's cat
Young's interference experiment
Measurement problem
Hydrodynamic quantum analogs
Pilot wave theory
References
Further reading
External links
Double slit interference lecture by Walter Lewin of MIT
Interactive animations
Huygens and interference
Single particle experiments
Website with the movie and other information from the first single electron experiment by Merli, Missiroli, and Pozzi.
Movie showing single electron events build up to form an interference pattern in double-slit experiments. Several versions with and without narration (File size = 3.6 to 10.4 MB) (Movie Length = 1m 8s)
Freeview video 'Electron Waves Unveil the Microcosmos' A Royal Institution Discourse by Akira Tonomura provided by the Vega Science Trust
Hitachi website that provides background on Tonomura video and link to the video
Hydrodynamic analog
"Single-particle interference observed for macroscopic objects"
Pilot-Wave Hydrodynamics: Supplemental Video
Through the Wormhole: Yves Couder . Explains Wave/Particle Duality via Silicon Droplets
Computer simulations
Java demonstration of Young's double slit interference
A simulation that runs in Mathematica Player, in which the number of quantum particles, the frequency of the particles, and the slit separation can be independently varied
Foundational quantum physics
Physics experiments
Wave mechanics | Double-slit experiment | ["Physics"] | 5,874 | ["Physical phenomena", "Physics experiments", "Foundational quantum physics", "Classical mechanics", "Quantum mechanics", "Waves", "Wave mechanics", "Experimental physics"] |
8,674 | https://en.wikipedia.org/wiki/DECT | Digital Enhanced Cordless Telecommunications (DECT) is a cordless telephony standard maintained by ETSI. It originated in Europe, where it is the common standard, replacing earlier standards, such as CT1 and CT2. Since the DECT-2020 standard onwards, it also includes IoT communication.
Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio-frequency regulations. This forced development of a variation of DECT called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America.
DECT was originally intended for fast roaming between networked base stations, and the first DECT product was Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to a traditional analog telephone line, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Ascom, Cisco, Grandstream, Snom, Spectralink, and RTX. DECT can also be used for purposes other than cordless phones, such as baby monitors, wireless microphones and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol are variants tailored for home security, automation, and the internet of things (IoT).
The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP-conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT forum.
The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio.
DECT-2020 New Radio, marketed as NR+ (New Radio plus), is a 5G data transmission protocol which meets ITU-R IMT-2020 requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices.
Standards history
The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992 when the first round of standards were published. These were the ETS 300-175 series in nine parts defining the air interface, and ETS 300-176 defining how the units should be type approved. A technical report, ETR-178, was also published to explain the standard. Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing.
The standard was named Digital European Cordless Telephone at its launch by CEPT in November 1987; its name was soon changed to Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT).
DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such as baby monitors and wireless networks.
The New Generation DECT (NG-DECT) standard was first published in 2007; it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570. The DECT Forum maintains the CAT-iq trademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527.
The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011 and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable home automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi from microwave ovens, which all operate in the unlicensed 2.4 GHz ISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit.
A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications such as wireless speakers, headphones, headsets, and microphones. LC3plus supports scalable 16-bit narrowband, wideband, super wideband, fullband, and 24-bit high-resolution fullband and ultra-band coding, with sample rates of 8, 16, 24, 32, 48 and 96 kHz and audio bandwidth of up to 48 kHz.
DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based on cyclic prefix orthogonal frequency-division multiplexing (CP-OFDM) capable of up to 1.2 Gbit/s transfer rate with QAM-1024 modulation. The updated standard supports multi-antenna MIMO and beamforming, FEC channel coding, and hybrid automatic repeat request. There are 17 radio channel frequencies in the range from 450 MHz up to 5,875 MHz, and channel bandwidths of 1,728, 3,456, or 6,912 kHz. Direct communication between end devices is possible with a mesh network topology. In October 2021, DECT-2020 NR was approved for the IMT-2020 standard, for use in Massive Machine Type Communications (MMTC) industry automation, Ultra-Reliable Low-Latency Communications (URLLC), and professional wireless audio applications with point-to-point or multicast communications; the proposal was fast-tracked by ITU-R following real-world evaluations. The new protocol will be marketed as NR+ (New Radio plus) by the DECT Forum. OFDMA and SC-FDMA modulations were also considered by the ETSI DECT committee.
OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT forum.
Application
The DECT standard originally envisaged three major areas of application:
Domestic cordless telephony, using a single base station to connect one or more handsets to the public telecommunications network.
Enterprise premises cordless PABXs and wireless LANs, using many base stations for coverage. Calls continue as users move between different coverage cells, through a mechanism called handover. Calls can be both within the system and to the public telecommunications network.
Public access, using large numbers of base stations to provide high capacity building or urban area coverage as part of a public telecommunications network.
Wireless microphone systems, for Speech optimized applications with Automatic frequency and interference management.
Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprise PABX market, albeit much smaller than the cordless home market, has been very successful as well, and all the major PABX vendors have advanced DECT access options available. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998 Telecom Italia launched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy. The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001.
DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could extend to over . One example is the corDECT standard.
The first data application for DECT was Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates.
Data applications such as electronic cash terminals, traffic lights, and remote door openers also exist, but have been eclipsed by Wi-Fi, 3G and 4G which compete with DECT for both voice and data.
Characteristics
The DECT standard specifies a means for a portable phone or "Portable Part" to access a fixed telephone network via radio. Base station or "Fixed Part" is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as public switched telephone network (telephone jack), office PBX, ISDN, or VoIP over Ethernet connection.
Typical abilities of a domestic DECT Generic Access Profile (GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone jack. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating between each other, and sometimes as walkie-talkies, intercommunicating without telephone line connection.
DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a band gap of 1728 kHz.
DECT operates as a multicarrier frequency-division multiple access (FDMA) and time-division multiple access (TDMA) system. This means that the radio spectrum is divided into physical carriers in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per every frame of 10 ms. DECT uses time-division duplex (TDD), which means that down- and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, with each time slot able to occupy any available frequency channel; in total, 10 × 12 = 120 duplex channels are available, each carrying 32 kbit/s.
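A quick back-of-the-envelope check of these figures (plain Python, nothing DECT-specific beyond the numbers quoted above):

```python
frequency_channels = 10                              # FDMA carriers in the European DECT band
slots_per_frame = 24                                 # TDMA slots in each 10 ms frame
duplex_channels_per_carrier = slots_per_frame // 2   # TDD pairs one uplink with one downlink slot

total_duplex_channels = frequency_channels * duplex_channels_per_carrier
print(total_duplex_channels)                         # -> 120 duplex channels, each carrying 32 kbit/s
```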
DECT also provides frequency-hopping spread spectrum over TDMA/TDD structure for ISM band applications. If frequency-hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each timeslot can be assigned to a different channel in order to exploit advantages of frequency hopping and to avoid interference from other users in asynchronous fashion.
DECT allows interference-free wireless operation to around outdoors. Indoor performance is reduced when interior spaces are constrained by walls.
DECT performs with fidelity in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices.
Technical properties
ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the following technical properties:
Audio codec:
mandatory:
32kbit/s G.726 ADPCM (narrow band),
64kbit/s G.722 sub-band ADPCM (wideband)
optional:
64kbit/s G.711 μ-law/A-law PCM (narrow band),
32kbit/s G.729.1 (wideband),
32kbit/s MPEG-4 ER AAC-LD (wideband),
64kbit/s MPEG-4 ER AAC-LD (super-wideband)
Frequency: the DECT physical layer specifies RF carriers for the frequency ranges 1880 MHz to 1980 MHz and 2010 MHz to 2025 MHz, as well as 902 MHz to 928 MHz and 2400 MHz to 2483.5 MHz ISM band with frequency-hopping for the U.S. market. The most common spectrum allocation is 1880 MHz to 1900 MHz; outside Europe, 1900 MHz to 1920 MHz and 1910 MHz to 1930 MHz spectrum is available in several countries.
in Europe, as well as South Africa, Asia, Hong Kong, Australia, and New Zealand
in Korea
in Taiwan
(J-DECT) in Japan
in China (until 2003)
in Brazil
in Latin America
(DECT 6.0) in the United States and Canada
Carriers (1.728 MHz spacing):
10 channels in Europe and Latin America
8 channels in Taiwan
5 channels in the US, Brazil, Japan
3 channels in Korea
Time slots: 2 × 12 (up and down stream)
Channel allocation: dynamic
Average transmission power: 10 mW (250 mW peak) in Europe & Japan, 4 mW (100 mW peak) in the US
Physical layer
The DECT physical layer uses FDMA/TDMA access with TDD.
Gaussian frequency-shift keying (GFSK) modulation is used: the binary one is coded with a frequency increase by 288 kHz, and the binary zero with a frequency decrease of 288 kHz. With high quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations with 4 and 6 bits per symbol can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s.
DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly from suggestion by the base station) can initiate either intracell handover, selecting another channel/transmitter on the same base, or intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects a channel with the minimum interference from the RSSI list.
The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges.
Data link control layer
The DECT media access control layer controls the physical layer and provides connection oriented, connectionless and broadcast services to the higher layers.
The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD. They are based on HDLC.
GFSK modulation uses a bit rate of 1152 kbit/s, with a 10 ms frame (11,520 bits) that contains 24 time slots. Each slot contains 480 bits, some of which are reserved for the physical packet while the rest is guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP).
There are several combinations of slots and corresponding types of physical packets with GFSK modulation:
Basic packet (P32) 420 or 424 bits "full slot", used for normal speech transmission. User data (B-field) contains 320 bits.
Low-capacity packet (P00) 96 bits at the beginning of the time slot ("short slot"). This packet only contains 64-bit header (A-field) used as a dummy bearer to broadcast base station identification when idle.
Variable capacity packet (P00j) 100 + j or 104 + j bits, either two half-slots (0 ≤ j ≤ 136) or "long slot" (137 ≤ j ≤ 856). User data (B-field) contains j bits.
P64 (j = 640), P67 (j = 672) "long slot", used by NG-DECT/CAT-iq wideband voice and data.
High-capacity packet (P80) 900 or 904 bits, "double slot". This packet uses two time slots and always begins in an even time slot. The B-field is increased to 800 bits.
The 420/424 bits of a GFSK basic packet (P32) contain the following fields:
32 bits synchronization code (S-field): constant bit string AAAAE98AH for FP transmission, 55551675H for PP transmission
388 bits data (D-field), including
64 bits header (A-field): control traffic in logical channels C, M, N, P, and Q
320 bits user data (B-field): DECT payload, i.e. voice data
4 bits error-checking (X-field): CRC of the B-field
4 bits collision detection/channel quality (Z-field): optional, contains a copy of the X-field
The resulting full data rate is 32 kbit/s, available in both directions.
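These per-slot numbers can be cross-checked against the 1152 kbit/s GFSK bit rate and the 32 kbit/s payload rate; the snippet below is a simple sanity check, not a protocol implementation:

```python
frame_duration_s = 0.010                         # 10 ms frame
bits_per_frame = 11_520                          # 1152 kbit/s GFSK bit rate x 10 ms
slots_per_frame = 24
print(bits_per_frame / slots_per_frame)          # -> 480.0 bits per slot (packet plus guard space)

# A P32 "full slot" packet carries a 320-bit B-field of user data per frame.
b_field_bits = 320
print(b_field_bits / frame_duration_s / 1000)    # -> 32.0 kbit/s of payload in each direction
```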
Network layer
The DECT network layer always contains the following protocol entities:
Call Control (CC)
Mobility Management (MM)
Optionally it may also contain others:
Call Independent Supplementary Services (CISS)
Connection Oriented Message Service (COMS)
Connectionless Message Service (CLMS)
All these communicate through a Link Control Entity (LCE).
The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol. Many DECT-specific changes have been made.
The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT.
Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability.
DECT GAP is an interoperability profile for DECT. The intent is that two different products from different manufacturers that both conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions.
Security
The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number.
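The DECT Standard Authentication Algorithm itself is not publicly specified, so the sketch below only illustrates the general challenge–response pattern described above; HMAC-SHA-256 stands in as a placeholder for DSAA, and the key length and message layout are illustrative assumptions, not the DECT wire format.

```python
import hmac, hashlib, os

# 128-bit shared Unique Authentication Key (UAK), established at registration.
uak = os.urandom(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    # Placeholder keyed function; real DECT uses DSAA/DSAA2, not HMAC.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The base authenticates the handset: it sends two random numbers...
rs, rand_f = os.urandom(8), os.urandom(8)
handset_response = respond(uak, rs + rand_f)

# ...and accepts the handset if the response matches its own computation.
print(hmac.compare_digest(handset_response, respond(uak, rs + rand_f)))  # -> True
```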
The standard also provides encryption services with the DECT Standard Cipher (DSC). The encryption is fairly weak, using a 35-bit initialization vector and encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was only available under a non-disclosure agreement to the phones' manufacturers from ETSI.
The properties of the DECT protocol make it hard to intercept a frame, modify it, and send it again later, as DECT frames are based on time-division multiplexing and need to be transmitted at a specific point in time. Unfortunately, very few DECT devices on the market implemented authentication and encryption procedures, and even when encryption was used by the phone, it was possible to mount a man-in-the-middle attack by impersonating a DECT base station and reverting to unencrypted mode, which allowed calls to be listened to, recorded, and re-routed to a different destination.
After an unverified report of a successful attack in 2002, members of the deDECTed.org project actually did reverse engineer the DECT Standard Cipher in 2008, and as of 2010 there has been a viable attack on it that can recover the key.
In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite.
DECT Forum also launched the DECT Security certification program which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication.
Profiles
Various access profiles have been defined in the DECT standard:
Public Access Profile (PAP) (deprecated)
Generic Access Profile (GAP) ETSI EN 300 444
Cordless Terminal Mobility (CTM) Access Profile (CAP) ETSI EN 300 824
Data access profiles
DECT Packet Radio System (DPRS) ETSI EN 301 649
DECT Multimedia Access Profile (DMAP)
DECT Evolution and Audio Solution (DA14495)
Multimedia in the Local Loop Access Profile (MRAP)
Open Data Access Profile (ODAP)
Radio in the Local Loop (RLL) Access Profile (RAP) ETSI ETS 300 765
Interworking profiles (IWP)
DECT/ISDN Interworking Profile (IIP) ETSI EN 300 434
DECT/GSM Interworking Profile (GIP) ETSI EN 301 242
DECT/UMTS Interworking Profile (UIP) ETSI TS 101 863
Additional specifications
DECT 6.0
DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens and the DECT USA Working Group / Siemens ICM.
In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference. Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor.
Before 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in unlicensed 2.4 GHz and 900 MHz Region 2 ISM bands; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment.
North-American products may not be used in Europe, Pakistan, Sri Lanka, and Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European Telecommunications Authorities, PTA, Telecommunications Regulatory Commission of Sri Lanka and the Independent Communication Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and use is prohibited by the Federal Communications Commission and Innovation, Science and Economic Development Canada.
DECT 8.0 HD is a marketing designation for North American DECT devices certified with CAT-iq 2.0 "Multi Line" profile.
NG-DECT/CAT-iq
Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on New Generation DECT (NG-DECT) series of standards from ETSI.
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through SIP and H.323 protocols.
There are several CAT-iq profiles which define supported voice features:
CAT-iq 1.0 "HD Voice" (ETSI TS 102 527-1): wideband audio, calling party line and name identification (CLIP/CNAP)
CAT-iq 2.0 "Multi Line" (ETSI TS 102 527-3): multiple lines, line name, call waiting, call transfer, phonebook, call list, DTMF tones, headset, settings
CAT-iq 2.1 "Green" (ETSI TS 102 527-5): 3-party conference, call intrusion, caller blocking (CLIR), answering machine control, SMS, power-management
CAT-iq Data light data services, software upgrade over the air (SUOTA) (ETSI TS 102 527-4)
CAT-iq IOT Smart Home connectivity (IOT) with DECT Ultra Low Energy (ETSI TS 102 939)
CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. CAT-iq 2.0/2.1 feature set is designed to support IP-DECT base stations found in office IP-PBX and home gateways.
DECT-2020
DECT-2020, also called NR+, is a new radio standard by ETSI for the DECT bands worldwide. The standard was designed to meet a subset of the ITU IMT-2020 5G requirements that are applicable to IoT and the industrial internet of things. DECT-2020 is compliant with the IMT-2020 requirements for Ultra-Reliable Low-Latency Communications (URLLC) and massive Machine Type Communication (mMTC).
DECT-2020 NR has new capabilities compared to DECT and DECT Evolution:
Better multipath operation (OFDM Cyclic Prefix)
Better radio sensitivity (OFDM and Turbocodes)
Better resistance to radio interference (co-channel interference rejection)
Better bandwidth utilization
Mesh deployment
The DECT-2020 standard has been designed to co-exist in the DECT radio band with existing DECT deployments. It uses the same Time Division slot timing and Frequency Division center frequencies and uses pre-transmit scanning to minimize co-channel interference.
DECT for data networks
Other interoperability profiles exist in the DECT suite of standards, and in particular the DPRS (DECT Packet Radio Services) bring together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good range (up to indoors and using directional antennae outdoors), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi. The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT.
However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. A key weakness was also the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance and DECT's time as a technically competitive wireless data transport had passed.
Health and safety
DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies.
In North America, the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe.
The UK Health Protection Agency (HPA) claims that due to a mobile phone's adaptive power ability, a European DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. A European DECT cordless phone's radiation has an average output power of 10 mW but is in the form of 100 bursts per second of 250 mW, a strength comparable to some mobile phones.
Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on medical effects of mobile phones which acknowledges that the longer term effects (over several decades) require further research.
See also
GSM Interworking Profile (GIP)
IP-DECT
CT2 (DECT's predecessor in Europe)
Net3
CorDECT
WDECT
Unlicensed Personal Communications Services
Microcell
Wireless local loop
References
Footnotes
Standards
ETSI EN 300 175 V2.9.1 (2022-03). Digital Enhanced Cordless Telecommunications (DECT) Common Interface (CI)
ETSI TS 103 636 v1.5.1 (2024-03). DECT-2020 New Radio (NR)
Digital Enhanced Cordless Telecommunications (DECT)
Further reading
Technical Report: Multicell Networks based on DECT and CAT-iq . Dosch & Amand Research
External links
DECT Forum at dect.org
DECT information at ETSI
DECTWeb.com
Open source implementation of a DECT stack
Broadband
ETSI
Local loop
Mobile telecommunications standards
Software-defined radio
Wireless communication systems
DECT | DECT | ["Technology", "Engineering"] | 6,775 | ["Radio electronics", "Mobile telecommunications standards", "Mobile telecommunications", "Wireless communication systems", "DECT", "Software-defined radio"] |
8,681 | https://en.wikipedia.org/wiki/Data%20compression%20ratio | Data compression ratio, also known as compression power, is a measurement of the relative reduction in size of data representation produced by a data compression algorithm. It is typically expressed as the division of uncompressed size by compressed size.
Definition
Data compression ratio is defined as the ratio between the uncompressed size and compressed size:

\text{Compression Ratio} = \frac{\text{Uncompressed Size}}{\text{Compressed Size}}
Thus, a representation that compresses a file's storage size from 10 MB to 2 MB has a compression ratio of 10/2 = 5, often notated as an explicit ratio, 5:1 (read "five" to "one"), or as an implicit ratio, 5/1. This formulation applies equally for compression, where the uncompressed size is that of the original; and for decompression, where the uncompressed size is that of the reproduction.
Sometimes the space saving is given instead, which is defined as the reduction in size relative to the uncompressed size:

\text{Space Saving} = 1 - \frac{\text{Compressed Size}}{\text{Uncompressed Size}}
Thus, a representation that compresses the storage size of a file from 10 MB to 2 MB yields a space saving of 1 - 2/10 = 0.8, often notated as a percentage, 80%.
For signals of indefinite size, such as streaming audio and video, the compression ratio is defined in terms of uncompressed and compressed data rates instead of data sizes:

\text{Compression Ratio} = \frac{\text{Uncompressed Data Rate}}{\text{Compressed Data Rate}}

and instead of space saving, one speaks of data-rate saving, which is defined as the data-rate reduction relative to the uncompressed data rate:

\text{Data Rate Saving} = 1 - \frac{\text{Compressed Data Rate}}{\text{Uncompressed Data Rate}}
For example, uncompressed songs in CD format have a data rate of 16 bits/channel x 2 channels x 44.1 kHz ≅ 1.4 Mbit/s, whereas AAC files on an iPod are typically compressed to 128 kbit/s, yielding a compression ratio of 10.9, for a data-rate saving of 0.91, or 91%.
When the uncompressed data rate is known, the compression ratio can be inferred from the compressed data rate.
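Both definitions are straightforward to express in code; this sketch simply restates the formulas and the two worked examples above (the function names are ours):

```python
def compression_ratio(uncompressed, compressed):
    return uncompressed / compressed

def space_saving(uncompressed, compressed):
    return 1 - compressed / uncompressed

# File-size example: 10 MB compressed to 2 MB.
print(compression_ratio(10, 2), space_saving(10, 2))         # -> 5.0 0.8  (5:1, 80 %)

# Data-rate example: CD audio (~1.4 Mbit/s, rounded as in the text) vs 128 kbit/s AAC.
cd_rate, aac_rate = 1_400_000, 128_000
print(round(compression_ratio(cd_rate, aac_rate), 1))        # -> 10.9
print(round(space_saving(cd_rate, aac_rate), 2))             # -> 0.91, i.e. 91 %
```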
Lossless vs. Lossy
Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but it does not generally achieve a compression ratio much better than 2:1 because of the intrinsic entropy of the data. Compression algorithms which provide higher ratios either incur very large overheads or work only for specific data sequences (e.g. compressing a file with mostly zeros). In contrast, lossy compression (e.g. JPEG for images, or MP3 and Opus for audio), as used for example in Bluetooth audio streaming, can achieve much higher compression ratios at the cost of a decrease in quality, since visual or audio compression artifacts from the loss of important information are introduced. A compression ratio of at least 50:1 is needed to get 1080i video into a 20 Mbit/s MPEG transport stream.
Uses
The data compression ratio can serve as a measure of the complexity of a data set or signal. In particular it is used to approximate the algorithmic complexity. It is also used to see how much a file can be compressed without increasing its original size.
References
External links
Nondegrading lossy compression
Data compression
Engineering ratios | Data compression ratio | ["Mathematics", "Engineering"] | 644 | ["Quantity", "Metrics", "Engineering ratios"] |
8,697 | https://en.wikipedia.org/wiki/DNA%20ligase | DNA ligase is a type of enzyme that facilitates the joining of DNA strands together by catalyzing the formation of a phosphodiester bond. It plays a role in repairing single-strand breaks in duplex DNA in living organisms, but some forms (such as DNA ligase IV) may specifically repair double-strand breaks (i.e. a break in both complementary strands of DNA). Single-strand breaks are repaired by DNA ligase using the complementary strand of the double helix as a template, with DNA ligase creating the final phosphodiester bond to fully repair the DNA.
DNA ligase is used in both DNA repair and DNA replication (see Mammalian ligases). In addition, DNA ligase has extensive use in molecular biology laboratories for recombinant DNA experiments (see Research applications). Purified DNA ligase is used in gene cloning to join DNA molecules together to form recombinant DNA.
Enzymatic mechanism
The mechanism of DNA ligase is to form two covalent phosphodiester bonds between the 3' hydroxyl ends of one nucleotide (the "acceptor") and the 5' phosphate end of another (the "donor"). Two ATP molecules are consumed for each phosphodiester bond formed. AMP is required for the ligase reaction, which proceeds in four steps:
Reorganization of the activity site, such as at nicks in DNA segments or between Okazaki fragments.
Adenylylation (addition of AMP) of a lysine residue in the active center of the enzyme, pyrophosphate is released;
Transfer of the AMP to the 5' phosphate of the so-called donor, formation of a pyrophosphate bond;
Formation of a phosphodiester bond between the 5' phosphate of the donor and the 3' hydroxyl of the acceptor.
Ligase will also work with blunt ends, although higher enzyme concentrations and different reaction conditions are required.
Types
E. coli
The E. coli DNA ligase is encoded by the lig gene. DNA ligase in E. coli, as well as most prokaryotes, uses energy gained by cleaving nicotinamide adenine dinucleotide (NAD) to create the phosphodiester bond. It does not ligate blunt-ended DNA except under conditions of molecular crowding with polyethylene glycol, and cannot join RNA to DNA efficiently.
The activity of E. coli DNA ligase can be enhanced by DNA polymerase at the right concentrations. Enhancement only works when the concentration of DNA polymerase I is much lower than that of the DNA fragments to be ligated. When the concentration of Pol I DNA polymerase is higher, it has an adverse effect on E. coli DNA ligase.
T4
T4 DNA ligase originates from bacteriophage T4 (a bacteriophage that infects Escherichia coli bacteria) and is the ligase most commonly used in laboratory research. It can ligate either cohesive or blunt ends of DNA, oligonucleotides, as well as RNA and RNA-DNA hybrids, but not single-stranded nucleic acids. It can also ligate blunt-ended DNA with much greater efficiency than E. coli DNA ligase. Unlike E. coli DNA ligase, T4 DNA ligase cannot utilize NAD and it has an absolute requirement for ATP as a cofactor. Some engineering has been done to improve the in vitro activity of T4 DNA ligase; one successful approach, for example, tested T4 DNA ligase fused to several alternative DNA binding proteins and found that the constructs with either p50 or NF-kB as fusion partners were over 160% more active in blunt-end ligations for cloning purposes than wild type T4 DNA ligase. A typical reaction for inserting a fragment into a plasmid vector would use about 0.01 (sticky ends) to 1 (blunt ends) units of ligase. The optimal incubation temperature for T4 DNA ligase is 16 °C.
Bacteriophage T4 ligase mutants have increased sensitivity to both UV irradiation and the alkylating agent methyl methanesulfonate indicating that DNA ligase is employed in the repair of the DNA damages caused by these agents.
Mammalian
In mammals, there are four specific types of ligase.
DNA ligase 1: ligates the nascent DNA of the lagging strand after the Ribonuclease H has removed the RNA primer from the Okazaki fragments.
DNA ligase 3: complexes with DNA repair protein XRCC1 to aid in sealing DNA during nucleotide excision repair and the joining of recombinant fragments. Of all the known mammalian DNA ligases, only ligase 3 has been found to be present in mitochondria.
DNA ligase 4: complexes with XRCC4. It catalyzes the final step in the non-homologous end joining DNA double-strand break repair pathway. It is also required for V(D)J recombination, the process that generates diversity in immunoglobulin and T-cell receptor loci during immune system development.
DNA ligase 2: a purification artifact resulting from proteolytic degradation of DNA ligase 3. It was initially recognized as a distinct DNA ligase, which is the reason for the unusual nomenclature of DNA ligases.
DNA ligase from eukaryotes and some microbes uses adenosine triphosphate (ATP) rather than NAD.
Thermostable
Derived from a thermophilic bacterium, this type of enzyme is stable and active at much higher temperatures than conventional DNA ligases. Its half-life is 48 hours at 65 °C and greater than 1 hour at 95 °C. Ampligase DNA ligase has been shown to be active for at least 500 thermal cycles (94 °C/80 °C) or 16 hours of cycling. This exceptional thermostability permits extremely high hybridization stringency and ligation specificity.
Measurement of activity
There are at least three different units used to measure the activity of DNA ligase:
Weiss unit - the amount of ligase that catalyzes the exchange of 1 nmole of 32P from inorganic pyrophosphate to ATP in 20 minutes at 37 °C. This is the one most commonly used.
Modrich-Lehman unit - this is rarely used, and one unit is defined as the amount of enzyme required to convert 100 nmoles of d(A-T)n to an exonuclease-III resistant form in 30 minutes under standard conditions.
Many commercial suppliers of ligases use an arbitrary unit based on the ability of ligase to ligate cohesive ends. These units are often more subjective than quantitative and lack precision.
Research applications
DNA ligases have become indispensable tools in modern molecular biology research for generating recombinant DNA sequences. For example, DNA ligases are used with restriction enzymes to insert DNA fragments, often genes, into plasmids.
Choosing the optimal temperature is a vital aspect of performing efficient recombination experiments involving the ligation of cohesive-ended fragments. Most experiments use T4 DNA ligase (isolated from bacteriophage T4), which is most active at 37 °C. However, for optimal ligation efficiency with cohesive-ended fragments ("sticky ends"), the enzyme's optimal temperature needs to be balanced against the melting temperature Tm of the sticky ends being ligated: above the Tm, the homologous pairing of the sticky ends is not stable because the high temperature disrupts hydrogen bonding. A ligation reaction is most efficient when the sticky ends are already stably annealed, and disruption of the annealing ends would therefore result in low ligation efficiency. The shorter the overhang, the lower the Tm.
Since blunt-ended DNA fragments have no cohesive ends to anneal, the melting temperature is not a factor to consider within the normal temperature range of the ligation reaction. The limiting factor in blunt end ligation is not the activity of the ligase but rather the number of alignments between DNA fragment ends that occur. The most efficient ligation temperature for blunt-ended DNA would therefore be the temperature at which the greatest number of alignments can occur. The majority of blunt-ended ligations are carried out at 14-25 °C overnight. The absence of stably annealed ends also means that the ligation efficiency is lowered, requiring a higher ligase concentration to be used.
A novel use of DNA ligase can be seen in the field of nanochemistry, specifically in DNA origami. DNA-based self-assembly principles have proven useful for organizing nanoscale objects, such as biomolecules, nanomachines, and nanoelectronic and photonic components. Assembly of such nanostructures requires the creation of an intricate mesh of DNA molecules. Although DNA self-assembly is possible without any outside help using different substrates, such as the provision of a cationic aluminium-foil surface, DNA ligase can provide the enzymatic assistance that is required to make DNA lattice structures from DNA overhangs.
History
The first DNA ligase was purified and characterized in 1967 by the Gellert, Lehman, Richardson, and Hurwitz laboratories. It was first purified and characterized by Weiss and Richardson using a six-step chromatographic-fractionation process beginning with elimination of cell debris and addition of streptomycin, followed by several Diethylaminoethyl (DEAE)-cellulose column washes and a final phosphocellulose fractionation. The final extract contained 10% of the activity initially recorded in the E. coli media; during the process it was discovered that ATP and Mg2+ were necessary to optimize the reaction. The common commercially available DNA ligases were originally discovered in bacteriophage T4, E. coli and other bacteria.
Disorders
Genetic deficiencies in human DNA ligases have been associated with clinical syndromes marked by immunodeficiency, radiation sensitivity, and developmental abnormalities. LIG4 syndrome (Ligase IV syndrome) is a rare disease associated with mutations in DNA ligase 4 that interferes with dsDNA break-repair mechanisms. Ligase IV syndrome causes immunodeficiency in individuals and is commonly associated with microcephaly and marrow hypoplasia. A list of prevalent diseases caused by a lack of, or malfunction of, DNA ligase is as follows.
Xeroderma pigmentosum
Xeroderma pigmentosum, which is commonly known as XP, is an inherited condition characterized by an extreme sensitivity to ultraviolet (UV) rays from sunlight. This condition mostly affects the eyes and areas of skin exposed to the sun. Some affected individuals also have problems involving the nervous system.
Ataxia-telangiectasia
Mutations in the ATM gene cause ataxia–telangiectasia. The ATM gene provides instructions for making a protein that helps control cell division and is involved in DNA repair. This protein plays an important role in the normal development and activity of several body systems, including the nervous system and immune system. The ATM protein assists cells in recognizing damaged or broken DNA strands and coordinates DNA repair by activating enzymes that fix the broken strands. Efficient repair of damaged DNA strands helps maintain the stability of the cell's genetic information. Affected children typically develop difficulty walking, problems with balance and hand coordination, involuntary jerking movements (chorea), muscle twitches (myoclonus), and disturbances in nerve function (neuropathy). The movement problems typically cause people to require wheelchair assistance by adolescence. People with this disorder also have slurred speech and trouble moving their eyes to look side-to-side (oculomotor apraxia).
Fanconi Anemia
Fanconi anemia (FA) is a rare, inherited blood disorder that leads to bone marrow failure. FA prevents bone marrow from making enough new blood cells for the body to work normally. FA also can cause the bone marrow to make many faulty blood cells. This can lead to serious health problems, such as leukemia.
Bloom syndrome
Bloom syndrome results in skin that is sensitive to sun exposure, and usually the development of a butterfly-shaped patch of reddened skin across the nose and cheeks. A skin rash can also appear on other areas that are typically exposed to the sun, such as the back of the hands and the forearms. Small clusters of enlarged blood vessels (telangiectases) often appear in the rash; telangiectases can also occur in the eyes. Other skin features include patches of skin that are lighter or darker than the surrounding areas (hypopigmentation or hyperpigmentation respectively). These patches appear on areas of the skin that are not exposed to the sun, and their development is not related to the rashes.
As a drug target
In recent studies, human DNA ligase I was used in Computer-aided drug design to identify DNA ligase inhibitors as possible therapeutic agents to treat cancer. Since excessive cell growth is a hallmark of cancer development, targeted chemotherapy that disrupts the functioning of DNA ligase can impede adjuvant cancer forms. Furthermore, it has been shown that DNA ligases can be broadly divided into two categories, namely, ATP- and NAD+-dependent. Previous research has shown that although NAD+-dependent DNA ligases have been discovered in sporadic cellular or viral niches outside the bacterial domain of life, there is no instance in which a NAD+-dependent ligase is present in a eukaryotic organism. The presence solely in non-eukaryotic organisms, unique substrate specificity, and distinctive domain structure of NAD+ dependent compared with ATP-dependent human DNA ligases together make NAD+-dependent ligases ideal targets for the development of new antibacterial drugs.
See also
DNA end
Lagging strand
DNA replication
Okazaki fragment
DNA polymerase
Sequencing by ligation
References
External links
DNA Ligase: PDB molecule of the month
Davidson College General Information on Ligase
OpenWetWare DNA Ligation Protocol
EC 6.5
Biotechnology
DNA replication
Enzymes
Genetics techniques | DNA ligase | [
"Engineering",
"Biology"
] | 2,918 | [
"Genetics techniques",
"Genetic engineering",
"Biotechnology",
"DNA replication",
"Molecular genetics",
"nan"
] |
8,703 | https://en.wikipedia.org/wiki/Darwin%20Awards | The Darwin Awards are a rhetorical tongue-in-cheek honor that originated in Usenet newsgroup discussions around 1985. They recognize individuals who have supposedly contributed to human evolution by selecting themselves out of the gene pool by dying or becoming sterilized by their own actions.
The project became more formalized with the creation of a website in 1993, followed by a series of books starting in 2000 by Wendy Northcutt. The criterion for the awards states: "In the spirit of Charles Darwin, the Darwin Awards commemorate individuals who protect our gene pool by making the ultimate sacrifice of their own lives. Darwin Award winners eliminate themselves in an extraordinarily idiotic manner, thereby improving our species' chances of long-term survival."
Accidental self-sterilization also qualifies, but the site notes: "Of necessity, the award is usually bestowed posthumously." The candidate is disqualified, though, if "innocent bystanders" are killed in the process, as they might have contributed positively to the gene pool. The logical problem presented by award winners who may have already reproduced is not addressed in the selection process owing to the difficulty of ascertaining whether or not a person has children; the Darwin Award rules state that the presence of offspring does not disqualify a nominee.
History
The origin of the Darwin Awards can be traced back to posts on Usenet group discussions as early as 1985. A post on August 7, 1985, describes the awards as being "given posthumously to people who have made the supreme sacrifice to keep their genes out of our pool. Style counts, not everyone who dies from their own stupidity can win." This early post cites an example of a person who tried to break into a vending machine and was crushed to death when he pulled it over himself. Another widely distributed early story mentioning the Darwin Awards is the JATO Rocket Car, which describes a man who strapped a jet-assisted take-off unit to his Chevrolet Impala in the Arizona desert and who died on the side of a cliff as his car reached tremendous speed. This story was later determined to be an urban legend by the Arizona Department of Public Safety. Wendy Northcutt, who runs the official Darwin Awards website, says the site does its best to confirm all stories submitted, listing them as "confirmed true by Darwin". Many of the viral emails circulating the Internet, however, are hoaxes and urban legends.
The website and collection of books were started in 1993 by Wendy Northcutt, who at the time was a graduate in molecular biology from the University of California, Berkeley. She went on to study neurobiology at Stanford University, doing research on cancer and telomerase. In her spare time, she organised chain letters from family members into the original Darwin Awards website hosted in her personal account space at Stanford. She eventually left the bench in 1998 and devoted herself full-time to her website and books in September 1999. By 2002, the website received 7 million page hits per month.
Northcutt encountered some difficulty in publishing the first book, since most publishers would only offer her a deal if she agreed to remove the stories from the Internet, but she refused: "It was a community! I could not do that. Even though it might have cost me a lot of money, I kept saying no." She eventually found a publisher who agreed to print a book containing only 10% of the material gathered for the website. The first book turned out to be a success, and was listed on The New York Times best-seller list for 6 months.
Not all of the feedback from the stories Northcutt published was positive, and she occasionally received email from people who knew the deceased. One such person advised: "This is horrible. It has shocked our community to the core. You should remove this." Northcutt demurred: "I can't. It's just too stupid." Northcutt kept the stories on the website and in her books, citing them as a "funny-but-true safety guide", and mentioning that children who read the book are going to be much more careful around explosives.
The website also awards Honorable Mentions to individuals who survive their misadventures with their reproductive capacity intact. One example of this is Larry Walters, who attached helium-filled weather balloons to a lawn chair and floated far above Long Beach, California, in July 1982. He reached a considerable altitude but survived, and was later fined for crossing controlled airspace. (Walters later fell into depression and died by suicide.) Another notable honorable mention was given to the two men who attempted to burgle the home of footballer Duncan Ferguson (who had an infamous reputation for physical aggression on and off the pitch, including four convictions for assault, and who had served six months in Glasgow's Barlinnie Prison) in 2001, with one burglar requiring three days' hospitalisation after being confronted by the player.
A 2014 study published in the British Medical Journal found that between 1995 and 2014, males represented 88.7% of Darwin Award winners.
The comedy film The Darwin Awards (2006), written and directed by Finn Taylor, was based on the website and many of the Darwin Awards stories.
Rules
Northcutt has stated five requirements for a Darwin Award: Two of them are that the event must be verified to have happened, and that the nominee themselves were responsible for the activity. The others are:
Nominee must be dead or rendered sterile
This may be subject to dispute. Potential awardees may be out of the gene pool because of age; others have already reproduced before their deaths. To avoid debates about the possibility of in vitro fertilization, artificial insemination, or cloning, the original Darwin Awards book applied the following "deserted island" test to potential winners: If the person were unable to reproduce when stranded on a deserted island with a fertile member of the opposite sex, he or she would be considered sterile. Winners of the award, in general, either are dead or have become unable to use their sexual organs.
Astoundingly stupid judgment
The candidate's foolishness must be unique and sensational, likely because the award is intended to be funny. A number of foolish but common activities, such as smoking in bed, are excluded from consideration. In contrast, self-immolation caused by smoking after being administered a flammable ointment in a hospital and specifically told not to smoke is grounds for nomination. One "Honorable Mention" (a man who attempted suicide by swallowing nitroglycerin pills, and then tried to detonate them by running into a wall) is noted to be in this category, despite being intentional and self-inflicted (i.e. attempted suicide), which would normally disqualify the inductee.
Capable of sound judgment
Nominees are normally expected to have been capable of sound judgment. In 2011, however, the awards targeted a 16-year-old boy in Leeds who died stealing copper wiring (he was underage at the time of his death; the standard minimum driving age in Great Britain being 17). In 2012, Northcutt made similar light of a 14-year-old girl in Brazil who was killed while leaning out of a school bus window, but she was "disqualified" for the award itself because of the likely public objection owing to the girl's age, which Northcutt asserts is based on "magical thinking".
Under this rule, and for reasons of good taste, individuals whose misfortune was caused by mental impairment or disability are not eligible for a Darwin Award, primarily to avoid mocking or making light of the disabled, and to ensure that the awards do not celebrate or trivialize tragedies involving vulnerable individuals.
Reception
The Darwin Awards have received varying levels of scrutiny from the scientific community. In his book Encyclopedia of Evolution, biology professor Stanley A. Rice comments: "Despite the tremendous value of these stories as entertainment, it is unlikely that they represent evolution in action", citing the nonexistence of "judgment impairment genes". In an essay in the book The Evolution of Evil, professor Nathan Hallanger acknowledges that the Darwin Awards are meant as black humor, but associates them with the eugenics movement of the early 20th century. University of Oxford biophysicist Sylvia McLain, writing for The Guardian, says that while the Darwin Awards are "clearly meant to be funny", they do not accurately represent how genetics work, further noting that "'smart' people do stupid things all the time". Geologist and science communicator Sharon A. Hill has criticized the Darwin Awards on both scientific and ethical grounds, claiming that no genetic traits impact personal intelligence or good judgment to be targeted by natural selection, and calling them an example of "ignorance" and "heartlessness".
Notable recipients
The driver of the JATO Rocket Car in the well-known urban legend.
Garry Hoy who fell from the 24th story of the Toronto-Dominion Centre whilst attempting to demonstrate to a group of students that the windows were unbreakable. His death has been featured in television programs such as 1000 Ways to Die and MythBusters.
Charles Stephens, the first person to die while attempting to go over Niagara Falls in a barrel.
Larry Walters was awarded an 'Honorable Mention' for his lawn chair balloon flight into controlled airspace.
John Allen Chau, who, supposedly acting on his own initiative, tried to convert an isolated indigenous group on North Sentinel Island to Christianity, and was killed by them.
Books
See also
List of inventors killed by their own inventions
List of selfie-related injuries and deaths
List of unusual deaths
Schadenfreude
Death by misadventure
Herman Cain Award, a similar ironic award
Ig Nobel Prize
References
External links
American comedy websites
Ironic and humorous awards
Incompetence
Black comedy
Internet properties established in 1993
Awards established in 1993
1993 establishments in the United States
Evolution | Darwin Awards | [
"Biology"
] | 1,999 | [
"Incompetence",
"Behavior",
"Human behavior"
] |
8,709 | https://en.wikipedia.org/wiki/Dhrystone | Dhrystone is a synthetic computing benchmark program developed in 1984 by Reinhold P. Weicker intended to be representative of system (integer) programming. The Dhrystone grew to become representative of general processor (CPU) performance. The name "Dhrystone" is a pun on a different benchmark algorithm called Whetstone, which emphasizes floating point performance.
With Dhrystone, Weicker gathered meta-data from a broad range of software, including programs written in FORTRAN, PL/1, SAL, ALGOL 68, and Pascal. He then characterized these programs in terms of various common constructs: procedure calls, pointer indirections, assignments, etc. From this he wrote the Dhrystone benchmark to correspond to a representative mix. Dhrystone was published in Ada, with the C version for Unix developed by Rick Richardson ("version 1.1") greatly contributing to its popularity.
Dhrystone vs. Whetstone
The Dhrystone benchmark contains no floating point operations, thus the name is a pun on the then-popular Whetstone benchmark for floating point operations. The output from the benchmark is the number of Dhrystones per second (the number of iterations of the main code loop per second).
Both Whetstone and Dhrystone are synthetic benchmarks, meaning that they are simple programs that are carefully designed to statistically mimic the processor usage of some common set of programs. Whetstone, developed in 1972, originally strove to mimic typical Algol 60 programs based on measurements from 1970, but eventually became most popular in its Fortran version, reflecting the highly numerical orientation of computing in the 1960s.
Issues addressed by Dhrystone
Dhrystone's eventual importance as an indicator of general-purpose ("integer") performance of new computers made it a target for commercial compiler writers. Various modern compiler static code analysis techniques (such as elimination of dead code: for example, code which uses the processor but produces internal results which are not used or output) make the use and design of synthetic benchmarks more difficult. Version 2.0 of the benchmark, released by Weicker and Richardson in March 1988, had a number of changes intended to foil a range of compiler techniques. Yet it was carefully crafted so as not to change the underlying benchmark. This effort to foil compilers was only partly successful. Dhrystone 2.1, released in May of the same year, had some minor changes and remains the current definition of Dhrystone.
Other than issues related to compiler optimization, various other issues have been cited with the Dhrystone. Most of these, including the small code size and small data set size, were understood at the time of its publication in 1984. More subtle is the slight over-representation of string operations, which is largely language-related: both Ada and Pascal have strings as normal variables in the language, whereas C does not, so what was simple variable assignment in reference benchmarks became buffer copy operations in the C library. Another issue is that the score reported does not include information which is critical when comparing systems, such as which compiler was used and what optimizations were applied.
Dhrystone remains remarkably resilient as a simple benchmark, but its continuing value in establishing true performance is questionable. It is easy to use, well documented, fully self-contained, well understood, and can be made to work on almost any system. In particular, it has remained in broad use in the embedded computing world, though the recently developed EEMBC benchmark suite, the CoreMark standalone benchmark, HINT, Stream, and even Bytemark are widely quoted and used, as well as more specific benchmarks for the memory subsystem (Cachebench), TCP/IP (TTCP), and many others.
Results
Dhrystone may represent a result more meaningfully than MIPS (million instructions per second) because instruction count comparisons between different instruction sets (e.g. RISC vs. CISC) can confound simple comparisons. For example, the same high-level task may require many more instructions on a RISC machine, but might execute faster than a single CISC instruction. Thus, the Dhrystone score counts only the number of program iteration completions per second, allowing individual machines to perform this calculation in a machine-specific way. Another common representation of the Dhrystone benchmark is the DMIPS (Dhrystone MIPS) obtained when the Dhrystone score is divided by 1757 (the number of Dhrystones per second obtained on the VAX 11/780, nominally a 1 MIPS machine).
Another way to represent results is in DMIPS/MHz, where DMIPS result is further divided by CPU frequency, to allow for easier comparison of CPUs running at different clock rates.
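As an illustrative sketch (not part of the Dhrystone distribution itself), the two derived figures described above can be computed directly from a raw score; the score and clock rate below are hypothetical example values:

```python
# Minimal sketch of the DMIPS and DMIPS/MHz conversions described above.
# The raw score and clock frequency below are hypothetical example values.

VAX_11_780_DHRYSTONES_PER_SEC = 1757  # reference machine, nominally 1 MIPS

def dmips(dhrystones_per_second: float) -> float:
    """Dhrystone MIPS: raw score divided by the VAX 11/780 reference score."""
    return dhrystones_per_second / VAX_11_780_DHRYSTONES_PER_SEC

def dmips_per_mhz(dhrystones_per_second: float, clock_mhz: float) -> float:
    """Clock-normalized score, for comparing CPUs running at different clock rates."""
    return dmips(dhrystones_per_second) / clock_mhz

if __name__ == "__main__":
    score = 8_785_000   # hypothetical: iterations of the main loop per second
    clock = 1_000.0     # hypothetical: CPU clock in MHz
    print(f"{dmips(score):.1f} DMIPS")                     # 5000.0 DMIPS
    print(f"{dmips_per_mhz(score, clock):.2f} DMIPS/MHz")  # 5.00 DMIPS/MHz
```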
Shortcomings
Using Dhrystone as a benchmark has pitfalls:
It features unusual code that is not usually representative of modern real-life programs.
It is susceptible to compiler optimizations. For example, it does a lot of string copying in an attempt to measure string copying performance. However, the strings in Dhrystone are of known constant length and their starts are aligned on natural boundaries, two characteristics usually absent from real programs. Therefore, an optimizer can replace a string copy with a sequence of word moves without any loops, which will be much faster. This optimization consequently overstates system performance, sometimes by more than 30%.
Dhrystone's small code size may fit in the instruction cache of a modern CPU, so that instruction fetch performance is not rigorously tested. Similarly, Dhrystone may also fit completely in the data cache, thus not exercising data cache miss performance. To counter the fits-in-the-cache problem, the SPECint benchmark was created in 1988 to include a suite of (initially 8) much larger programs (including a compiler) which could not fit into the L1 or L2 caches of that era.
See also
Standard Performance Evaluation Corporation (SPEC)
Geekbench
References
External links
Dhrystone Benchmark: Rationale for Version 2 and Measurement Rules (Reinhold P. Weicker, 1988)
DHRYSTONE Benchmark Program (Reinhold P. Weicker, 1995)
Benchmarks (computing)
Computer-related introductions in 1984 | Dhrystone | [
"Technology"
] | 1,288 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
8,724 | https://en.wikipedia.org/wiki/Doppler%20effect | The Doppler effect (also Doppler shift) is the change in the frequency of a wave in relation to an observer who is moving relative to the source of the wave. The Doppler effect is named after the physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession.
When the source of the sound wave is moving towards the observer, each successive cycle of the wave is emitted from a position closer to the observer than the previous cycle. Hence, from the observer's perspective, the time between cycles is reduced, meaning the frequency is increased. Conversely, if the source of the sound wave is moving away from the observer, each cycle of the wave is emitted from a position farther from the observer than the previous cycle, so the arrival time between successive cycles is increased, thus reducing the frequency.
For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect in such cases may therefore result from motion of the source, motion of the observer, motion of the medium, or any combination thereof. For waves propagating in vacuum, as is possible for electromagnetic waves or gravitational waves, only the difference in velocity between the observer and the source needs to be considered.
History
Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848).
General
In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the speed of waves in the medium, the relationship between observed frequency f and emitted frequency f_0 is given by:
f = ((c ± v_r) / (c ∓ v_s)) f_0
where
c is the propagation speed of waves in the medium;
v_r is the speed of the receiver relative to the medium. In the formula, v_r is added to c if the receiver is moving towards the source, subtracted if the receiver is moving away from the source;
v_s is the speed of the source relative to the medium. v_s is subtracted from c if the source is moving towards the receiver, added if the source is moving away from the receiver.
Note this relationship predicts that the frequency will decrease if either source or receiver is moving away from the other.
Equivalently, under the assumption that the source is either directly approaching or receding from the observer:
f / c_r = f_0 / c_s = 1 / λ
where
c_r is the wave's speed relative to the receiver;
c_s is the wave's speed relative to the source;
λ is the wavelength.
If the source approaches the observer at an angle (but still with a constant speed), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual.
If the speeds v_r and v_s are small compared to the speed of the wave, the relationship between observed frequency and emitted frequency is approximately
f ≈ (1 + Δv/c) f_0
where
Δv is the opposite of the relative speed of the receiver with respect to the source: it is positive when the source and the receiver are moving towards each other.
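The following is a minimal numerical sketch of the classical formulas above, applied to sound; the siren frequency, speeds, and speed of sound are assumed example values rather than figures taken from the sources cited here:

```python
# Sketch of the classical Doppler formulas above; all numbers are example values.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def observed_frequency(f0, v_receiver_toward_source=0.0, v_source_toward_receiver=0.0,
                       c=SPEED_OF_SOUND):
    """Exact classical formula: f = f0 * (c + v_r) / (c - v_s).
    Speeds are positive when the motion is towards the other party."""
    return f0 * (c + v_receiver_toward_source) / (c - v_source_toward_receiver)

def observed_frequency_approx(f0, closing_speed, c=SPEED_OF_SOUND):
    """Low-speed approximation: f ≈ (1 + Δv/c) f0, with Δv > 0 when closing."""
    return f0 * (1.0 + closing_speed / c)

f0 = 700.0  # Hz, assumed siren frequency
print(observed_frequency(f0, v_source_toward_receiver=30.0))   # ≈ 767 Hz (approaching)
print(observed_frequency(f0, v_source_toward_receiver=-30.0))  # ≈ 644 Hz (receding)
print(observed_frequency_approx(f0, 30.0))                     # ≈ 761 Hz (approximation)
```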
Consequences
Assuming a stationary observer and a wave source moving towards the observer at (or exceeding) the speed of the wave, the Doppler equation predicts an infinite (or negative) frequency as heard from the observer's perspective. Thus, the Doppler equation is inapplicable for such cases. If the wave is a sound wave and the sound source is moving faster than the speed of sound, the resulting shock wave creates a sonic boom.
Lord Rayleigh predicted the following effect in his classic book on sound: if the observer were moving from the (stationary) source at twice the speed of sound, a musical piece previously emitted by that source would be heard in correct tempo and pitch, but as if played backwards.
Applications
Sirens
A siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect by pointing out that the siren's pitch slides only because the vehicle does not hit the observer.
In other words, if the siren approached the observer directly, the pitch would remain constant, at a higher-than-stationary pitch, until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial speed does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity:
v_radial = v_s cos(θ)
where θ is the angle between the object's forward velocity and the line of sight from the object to the observer.
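A brief sketch of this angular dependence, with an assumed siren frequency, vehicle speed, and speed of sound, illustrates the slide in pitch as the vehicle passes:

```python
# Sketch of the pitch slide of a passing siren; all values are assumed examples.
import math

C = 343.0    # speed of sound in m/s (assumed)
F0 = 700.0   # emitted siren frequency in Hz (assumed)
V_S = 30.0   # vehicle speed in m/s (assumed)

def heard_frequency(angle_deg):
    """Frequency heard when the angle between the vehicle's velocity and the
    line of sight to the observer is angle_deg (0° = coming straight on)."""
    v_radial = V_S * math.cos(math.radians(angle_deg))  # component towards observer
    return F0 * C / (C - v_radial)

for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d}°  {heard_frequency(angle):6.1f} Hz")
# 0° ≈ 767 Hz, 90° = 700 Hz, 180° ≈ 644 Hz: the pitch slides down as the vehicle passes.
```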
Astronomy
The Doppler effect for electromagnetic waves such as light is of widespread use in astronomy to measure the speed at which stars and galaxies are approaching or receding from us, resulting in so-called blueshift or redshift, respectively. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. This effect typically happens on a very small scale; there would not be a noticeable difference in visible light to the unaided eye.
The use of the Doppler effect in astronomy depends on knowledge of precise frequencies of discrete lines in the spectra of stars.
Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and −260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial speed means the star is receding from the Sun, negative that it is approaching.
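As an illustrative sketch, the (non-relativistic) wavelength shift corresponding to such radial velocities can be computed for a single spectral line; the choice of the hydrogen-alpha line here is an assumption made for the example:

```python
# Sketch: non-relativistic Doppler shift of a spectral line, adequate at speeds
# well below 1% of the speed of light. The hydrogen-alpha line is an assumed choice.
C_KM_S = 299_792.458   # speed of light in km/s
H_ALPHA_NM = 656.281   # rest wavelength of H-alpha in nm

def shifted_wavelength(rest_nm, radial_velocity_km_s):
    """Positive radial velocity = receding = redshift (longer wavelength)."""
    return rest_nm * (1.0 + radial_velocity_km_s / C_KM_S)

print(shifted_wavelength(H_ALPHA_NM, +308.0))  # ≈ 656.955 nm (redshifted, receding star)
print(shifted_wavelength(H_ALPHA_NM, -260.0))  # ≈ 655.712 nm (blueshifted, approaching star)
```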
The relationship between the expansion of the universe and the Doppler effect is not a simple matter of the source moving away from the observer. In cosmology, the redshift of expansion is considered separate from redshifts due to gravity or Doppler motion.
Distant galaxies also exhibit peculiar motion distinct from their cosmological recession speeds. If redshifts are used to determine distances in accordance with Hubble's law, then these peculiar motions give rise to redshift-space distortions.
Radar
The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target – e.g. a motor car, as police use radar to detect speeding motorists – as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's speed. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc.
Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency Δf observed by a radar from a target moving at relative speed Δv is twice that from the same target emitting a wave:
Δf = (2 Δv / c) f_0
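A minimal sketch of this two-way shift follows; the 24.15 GHz carrier and the target speed are assumed example values, not figures taken from any particular radar:

```python
# Sketch of the two-way (radar) Doppler shift; all numbers are example values.
C = 299_792_458.0  # speed of light in m/s

def radar_doppler_shift(carrier_hz, closing_speed_m_s):
    """Two-way shift: the wave is Doppler-shifted on the way out and again on
    the way back, so the total shift is twice the one-way value."""
    return 2.0 * closing_speed_m_s * carrier_hz / C

# Hypothetical K-band traffic radar at 24.15 GHz, car closing at 30 m/s (~108 km/h):
print(radar_doppler_shift(24.15e9, 30.0))  # ≈ 4833 Hz
```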
Medical
An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives).
Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on Doppler effect is an effective tool for diagnosis of vascular problems like stenosis.
Flow measurement
Instruments such as the laser Doppler velocimeter (LDV), and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
Velocity profile measurement
Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure in real time a complete velocity profile in almost any liquid containing particles in suspension, such as dust, gas bubbles, or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, stationary or transient. This technique is fully non-invasive.
Satellites
Satellite navigation
The Doppler shift can be exploited for satellite navigation such as in Transit and DORIS.
Satellite communication
Doppler also needs to be compensated in satellite communication.
Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to Earth's curvature. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant-frequency signal. After realizing that the Doppler shift had not been considered before launch of the Huygens probe of the 2005 Cassini–Huygens mission, the probe trajectory was altered to approach Titan in such a way that its transmissions traveled perpendicular to its direction of motion relative to Cassini, greatly reducing the Doppler shift.
Doppler shift of the direct path can be estimated by the following formula:
f_D,dir = (v_mob / λ) cos(θ) cos(φ)
where v_mob is the speed of the mobile station, λ is the wavelength of the carrier, θ is the elevation angle of the satellite and φ is the driving direction with respect to the satellite.
The additional Doppler shift due to the satellite moving can be described as:
f_D,sat = v_rel,sat / λ
where v_rel,sat is the relative speed of the satellite.
Audio
The Leslie speaker, most commonly associated with and predominantly used with the famous Hammond organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. At the listener's ear, this results in rapidly fluctuating frequencies of a keyboard note.
Vibration measurement
A laser Doppler vibrometer (LDV) is a non-contact instrument for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.
Robotics
Dynamic real-time path planning in robotics, to aid the movement of robots in a sophisticated environment with moving obstacles, often makes use of the Doppler effect. Such applications are especially common in competitive robotics, where the environment is constantly changing, such as robosoccer.
Inverse Doppler effect
Since 1968 scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The size of the Doppler shift depends on the refractive index of the medium a wave is traveling through. Some materials are capable of negative refraction, which should lead to a Doppler shift that works in a direction opposite that of a conventional Doppler shift. The first experiment that detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom in 2003. Later, the inverse Doppler effect was observed in some inhomogeneous materials, and predicted inside a Vavilov–Cherenkov cone.
See also
Bistatic Doppler shift
Differential Doppler effect
Doppler cooling
Dopplergraph
Fading
Fizeau experiment
Photoacoustic Doppler effect
Range rate
Rayleigh fading
Redshift
Laser Doppler imaging
Relativistic Doppler effect
Primary sources
References
Further reading
Doppler, C. (1842). Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (About the coloured light of the binary stars and some other stars of the heavens). Publisher: Abhandlungen der Königl. Böhm. Gesellschaft der Wissenschaften (V. Folge, Bd. 2, S. 465–482) [Proceedings of the Royal Bohemian Society of Sciences (Part V, Vol 2)]; Prague: 1842 (Reissued 1903). Some sources mention 1843 as year of publication because in that year the article was published in the Proceedings of the Bohemian Society of Sciences. Doppler himself referred to the publication as "Prag 1842 bei Borrosch und André", because in 1842 he had a preliminary edition printed that he distributed independently.
"Doppler and the Doppler effect", E. N. da C. Andrade, Endeavour Vol. XVIII No. 69, January 1959 (published by ICI London). Historical account of Doppler's original paper and subsequent developments.
David Nolte (2020). The fall and rise of the Doppler effect. Physics Today, v. 73, pp. 31–35. DOI: 10.1063/PT.3.4429
External links
The Doppler effect – The Feynman Lectures on Physics
Doppler Effect, ScienceWorld
Wave mechanics
Radio frequency propagation
Radar signal processing
Sound
Acoustics | Doppler effect | [
"Physics"
] | 3,201 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Classical mechanics",
"Acoustics",
"Astrophysics",
"Waves",
"Wave mechanics",
"Doppler effects"
] |
8,727 | https://en.wikipedia.org/wiki/%CE%94T%20%28timekeeping%29 | In precise timekeeping, ΔT (Delta T, delta-T, deltaT, or DT) is a measure of the cumulative effect of the departure of the Earth's rotation period from the fixed-length day of International Atomic Time (86,400 seconds). Formally, ΔT is the time difference between Universal Time (UT, defined by Earth's rotation) and Terrestrial Time (TT, independent of Earth's rotation). The value of ΔT for the start of 1902 was approximately zero; for 2002 it was about 64 seconds. So Earth's rotations over that century took about 64 seconds longer than would be required for days of atomic time. As well as this long-term drift in the length of the day, there are short-term fluctuations in the length of day which are dealt with separately.
Since early 2017, the length of the day has happened to be very close to the conventional value, and ΔT has remained within half a second of 69 seconds.
Calculation
Earth's rotational speed is ν = 1/T, and a day corresponds to one period T. A rotational acceleration dν/dt gives a rate of change of the period of dT/dt = −(1/ν²)·dν/dt, which is usually expressed as α = ν·(dT/dt) = −(1/ν)·(dν/dt). This has dimension of reciprocal time and is commonly reported in units of milliseconds-per-day per century, symbolized as ms/day/cy (understood as (ms/day)/cy). Integrating α gives an expression for ΔT against time.
Universal time
Universal Time is a time scale based on the Earth's rotation, which is somewhat irregular over short periods (days up to a century), thus any time based on it cannot have an accuracy better than 1 in 10⁸. However, a larger, more consistent effect has been observed over many centuries: Earth's rate of rotation is inexorably slowing down. This observed change in the rate of rotation is attributable to two primary forces, one decreasing and one increasing the Earth's rate of rotation. Over the long term, the dominating force is tidal friction, which is slowing the rate of rotation, contributing about +2.3 ms/day/cy, a very small fractional change in the length of the day. The most important force acting in the opposite direction, to speed up the rate, is believed to be a result of the melting of continental ice sheets at the end of the last glacial period. This removed their tremendous weight, allowing the land under them to begin to rebound upward in the polar regions, an effect that is still occurring today and will continue until isostatic equilibrium is reached. This "post-glacial rebound" brings mass closer to the rotational axis of the Earth, which makes the Earth spin faster, according to the law of conservation of angular momentum, similar to an ice skater pulling their arms in to spin faster. Models estimate this effect to contribute about −0.6 ms/day/cy. Combining these two effects, the net acceleration (actually a deceleration) of the rotation of the Earth, or the change in the length of the mean solar day (LOD), is +1.7 ms/day/cy or +62 s/cy² or +46.5 ns/day². This matches the average rate derived from astronomical records over the past 27 centuries.
Terrestrial time
Terrestrial Time is a theoretical uniform time scale, defined to provide continuity with the former Ephemeris Time (ET). ET was an independent time-variable, proposed (and its adoption agreed) in the period 1948–1952 with the intent of forming a gravitationally uniform time scale as far as was feasible at that time, and depending for its definition on Simon Newcomb's Tables of the Sun (1895), interpreted in a new way to accommodate certain observed discrepancies. Newcomb's tables formed the basis of all astronomical ephemerides of the Sun from 1900 through 1983: they were originally expressed (and published) in terms of Greenwich Mean Time and the mean solar day, but later, in respect of the period 1960–1983, they were treated as expressed in terms of ET, in accordance with the adopted ET proposal of 1948–52. ET, in turn, can now be seen (in light of modern results) as close to the average mean solar time between 1750 and 1890 (centered on 1820), because that was the period during which the observations on which Newcomb's tables were based were performed. While TT is strictly uniform (being based on the SI second, every second is the same as every other second), it is in practice realised by International Atomic Time (TAI) with an accuracy of about 1 part in 10¹⁴.
Earth's rate of rotation
Earth's rate of rotation must be integrated to obtain time, which is Earth's angular position (specifically, the orientation of the meridian of Greenwich relative to the fictitious mean sun). Integrating +1.7 ms/d/cy and centering the resulting parabola on the year 1820 yields (to a first approximation) ΔT ≈ 32 × ((year − 1820)/100)² − 20 seconds. Smoothed historical measurements of ΔT using total solar eclipses are about +17190 s in the year −500 (501 BC), +10580 s in 0 (1 BC), +5710 s in 500, +1570 s in 1000, and +200 s in 1500. After the invention of the telescope, measurements were made by observing occultations of stars by the Moon, which allowed the derivation of more closely spaced and more accurate values for ΔT. ΔT continued to decrease until it reached a plateau of +11 ± 6 s between 1680 and 1866. For about three decades immediately before 1902 it was negative, reaching −6.64 s. Then it increased to +63.83 s in January 2000 and +68.97 s in January 2018 and +69.361 s in January 2020, after even a slight decrease from 69.358 s in July 2019 to 69.338 s in September and October 2019 and a new increase in November and December 2019. This will require the addition of an ever-greater number of leap seconds to UTC as long as UTC tracks UT1 with one-second adjustments. (The SI second as now used for UTC, when adopted, was already a little shorter than the current value of the second of mean solar time.) Physically, the meridian of Greenwich in Universal Time is almost always to the east of the meridian in Terrestrial Time, both in the past and in the future. +17190 s, or about 4.77 h, corresponds to 71.625°E. This means that in the year −500 (501 BC), Earth's faster rotation would cause a total solar eclipse to occur 71.625° to the east of the location calculated using the uniform TT.
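A short sketch of this parabolic first approximation shows how it compares with the smoothed historical values quoted above:

```python
# Sketch of the first-approximation parabola for ΔT centred on 1820.
def delta_t_parabola(year):
    """ΔT ≈ 32·t² − 20 seconds, with t in centuries from 1820.
    A rough long-term fit only; it does not track short-term fluctuations."""
    t = (year - 1820) / 100.0
    return 32.0 * t * t - 20.0

for y in (-500, 0, 1000, 1900, 2000):
    print(y, round(delta_t_parabola(y)))
# -500 gives ~17200 s (measured ≈ 17190 s) and 2000 gives ~84 s (measured ≈ 64 s),
# illustrating that the parabola is only a first approximation.
```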
Values prior to 1955
All values of ΔT before 1955 depend on observations of the Moon, either via eclipses or occultations. The angular momentum lost by the Earth due to friction induced by the Moon's tidal effect is transferred to the Moon, increasing its angular momentum, which means that its moment arm (approximately its distance from the Earth, i.e. precisely the semi-major axis of the Moon's orbit) is increased (for the time being about +3.8 cm/year), which via Kepler's laws of planetary motion causes the Moon to revolve around the Earth at a slower rate. The cited values of ΔT assume that the lunar acceleration (actually a deceleration, that is a negative acceleration) due to this effect is dn/dt = −26″/cy², where n is the mean sidereal angular motion of the Moon. This is close to the best estimate for dn/dt as of 2002 of −25.858 ± 0.003″/cy², so ΔT need not be recalculated given the uncertainties and smoothing applied to its current values. Nowadays, UT is the observed orientation of the Earth relative to an inertial reference frame formed by extra-galactic radio sources, modified by an adopted ratio between sidereal time and solar time. Its measurement by several observatories is coordinated by the International Earth Rotation and Reference Systems Service (IERS).
Current values
Recall that ΔT = TT − UT1 by definition. While TT is only theoretical, it is commonly realized as TAI + 32.184 seconds, where TAI is UTC plus the current leap seconds, so ΔT = 32.184 s + (TAI − UTC) − (UT1 − UTC).
This can be rewritten as ΔT = 32.184 s + (TAI − UTC) − DUT1, where DUT1 is UT1 − UTC. The value of DUT1 is sent out in the weekly IERS Bulletin A, as well as by several time signal services, and by extension serves as a source of the current ΔT.
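A minimal sketch of that bookkeeping follows; the TAI − UTC offset of 37 s has applied since the leap second at the end of 2016, while the DUT1 value used here is only a placeholder for whatever the current IERS Bulletin A reports:

```python
# Sketch: ΔT from the published offsets. TAI − UTC = 37 s has held since the
# leap second at the end of 2016; DUT1 below is a placeholder example value.
TT_MINUS_TAI = 32.184  # seconds, by definition of Terrestrial Time

def delta_t(tai_minus_utc_s, dut1_s):
    """ΔT = TT − UT1 = 32.184 s + (TAI − UTC) − DUT1."""
    return TT_MINUS_TAI + tai_minus_utc_s - dut1_s

print(delta_t(37.0, -0.2))  # ≈ 69.4 s, close to the values quoted for 2019–2020
```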
Geological evidence
Tidal deceleration rates have varied over the history of the Earth-Moon system. Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then. Based on geological studies of tidal rhythmites, the day was 21.9±0.4 hours long 620 million years ago and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies.
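As a quick arithmetic check of the quoted day lengths, one can hold the length of the year approximately constant at its modern value (an assumption, since the year's length has also changed slightly) and divide by the number of solar days per year:

```python
# Quick check of the quoted ancient day lengths, assuming the length of the
# year has stayed approximately at its modern value.
HOURS_PER_MODERN_YEAR = 365.25 * 24.0

def day_length_hours(solar_days_per_year):
    return HOURS_PER_MODERN_YEAR / solar_days_per_year

print(day_length_hours(372))  # ≈ 23.6 h, Late Cretaceous (~70 Ma)
print(day_length_hours(400))  # ≈ 21.9 h, ~620 Ma, matching the tidal rhythmite value
```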
Notes
References
McCarthy, D.D. & Seidelmann, P.K. TIME: From Earth Rotation to Atomic Physics. Weinheim: Wiley-VCH. (2009).
Morrison, L.V. & Stephenson, F. R. "Historical values of the Earth's clock error ΔT and the calculation of eclipses" (pdf, 862 KB), Journal for the History of Astronomy 35 (2004) 327–336.
Stephenson, F.R. Historical Eclipses and Earth's Rotation. Cambridge University Press, 1997.
Stephenson, F. R. & Morrison, L.V. "Long-term fluctuations in the Earth's rotation: 700 BC to AD 1990". Philosophical Transactions of the Royal Society of London, Series A 351 (1995) 165–202. JSTOR link. Includes evidence that the 'growth' in Delta-T is being modified by an oscillation with a wavelength around 1500 years; if that is true, then during the next few centuries Delta-T values will increase more slowly than is envisaged.
External links
IERS Rapid Service-Prediction Center Values for Delta T.
Delta T webpage by Robert van Gent
Delta T webpage by Felix Verbelen (archived from the original dead URL)
Eclipse Predictions and Earth's Rotation by Fred Espenak (archived from the original dead URL)
Polynomial expressions for Delta T (ΔT) Espenak and Meeus
Delta-T Charts and data software (archived from the original dead URL)
Time scales | ΔT (timekeeping) | [
"Physics",
"Astronomy"
] | 2,227 | [
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
8,731 | https://en.wikipedia.org/wiki/Director%27s%20cut | In public use, a director's cut is the director's preferred version of a film (or video game, television episode, music video, commercial, etc.). It is generally considered a marketing term to represent the version of a film the director prefers, and is usually used as contrast to a theatrical release where the director did not have final cut privilege and did not agree with what was released. ("Cut" explicitly refers to the editing process.)
Most of the time, film directors do not have the "final cut" (final say on the version released to the public). Those with money invested in the film, such as the production companies, distributors, or studios, may make changes intended to make the film more profitable at the box office. In extreme cases that can sometimes mean a different ending, less ambiguity, or excluding scenes that would earn a more audience-restricting rating, but more often means that the film is simply shortened to provide more screenings per day.
With the rise of home video, the phrase became more generically used as a marketing term to communicate to consumers that this is the director's preferred edit of a film, and it implies the director was not happy with the version that was originally released. Sometimes there are big disagreements between the director's vision and the producer's vision, and the director's preferred edit is sought after by fans (for example Terry Gilliam's Brazil).
Not all films have separate "director's cuts" (often the director is happy with the theatrical release, even if they didn't have final cut privilege), and sometimes separate versions of films are released as "director's cuts" even if the director doesn't prefer them. One such example is Ridley Scott's Alien, which had a "director's cut" released in 2003, even though the director said it was purely for "marketing purposes" and didn't represent his preferred vision for the film.
Sometimes alternate edits are released, which are not necessarily director's preferred cuts, but which showcase different visions for the project for fans to enjoy. Examples include James Cameron's Avatar, which was released as both a "Special Edition" and "Extended" cuts, and Peter Jackson's Lord of the Rings, which were released on home video as "Extended Editions". These versions do not represent the director's preferred visions.
The term has since expanded to include media such as video games, comic books and music albums (the latter two of which don't actually have directors).
Original use of the phrase
Within the industry itself, a "director's cut" refers to a stage in the editing process and is not usually what a director wants to release to the public, because it is unfinished. The editing process of a film is broken into stages: first is the assembly/rough cut, where all selected takes are put together in the order in which they should appear in the film. Next, the editor's cut is reduced from the rough cut; the editor may be guided by their own choices or by notes from the director or producers. Last comes the final cut, which actually gets released or broadcast. In between the editor's cut and the final cut can come any number of fine cuts, including the director's cut. The director's cut may include unsatisfactory takes, a preliminary soundtrack, a lack of desired pick-up shots, etc., which the director would not like to be shown but uses as placeholders until satisfactory replacements can be inserted. This is still how the term is used within the film industry, as well as in commercials, television, and music videos.
Inception
The trend of releasing alternate cuts of films for artistic reasons became prominent in the 1970s; in 1974, the "director's cut" of The Wild Bunch was shown theatrically in Los Angeles to sold-out audiences. The theatrical release of the film had cut 10 minutes to get an R rating, but this cut was hailed as superior and has now become the definitive one. Other early examples include George Lucas's first two films being re-released following the success of Star Wars, in cuts which more closely resembled his vision, or Peter Bogdanovich re-cutting The Last Picture Show several times. Charlie Chaplin also re-released all of his films in the 1970s, several of which were re-cut (Chaplin's re-release of The Gold Rush in the 1940s is almost certainly the earliest prominent example of a director's re-cut film being released to the public). A theatrical re-release of Close Encounters of the Third Kind used the phrase "Special Edition" to describe a cut which was closer to Spielberg's intent but had a compromised ending demanded by the studio.
As the home video industry rose in the early 1980s, video releases of director's cuts were sometimes created for the small but dedicated cult fan market. Los Angeles cable station Z Channel is also cited as significant in the popularization of alternate cuts. Early examples of films released in this manner include Michael Cimino's Heaven's Gate, where a longer cut was recalled from theatres but subsequently shown on cable and eventually released to home video; James Cameron's Aliens, where a video release restored 20 minutes the studio had insisted on cutting; Cameron also voluntarily made cuts to the theatrical version of The Abyss for pacing but restored them for a video release, and most famously, Ridley Scott's Blade Runner, where an alternate workprint version was released to fan acclaim, ultimately resulting in the 1992 recut. Scott later recut the film once more, releasing a version dubbed "The Final Cut" in 2007. This was the final re-cut and the first in which Scott maintained creative control over the final product, leading to The Final Cut being considered the definitive version of the film.
Criticism
Once distributors discovered that consumers would buy alternate versions of films, it became more common for films to have alternative versions released, and the original sense of the phrase (the director's preferred vision) is often ignored, leading to so-called "director's cuts" of films whose director prefers the theatrically released version (or had actual final cut privilege in the first place). Such versions are often marketing ploys, assembled by simply restoring deleted scenes, sometimes adding as much as a half-hour to the length of the film without regard to pacing and storytelling.
As a result, the "director's cut" is often considered a misnomer. Some directors deliberately try to avoid labelling alternate versions as such (e.g. Peter Jackson and James Cameron; each using the phrases "Special Edition" or "Extended Edition" for alternate versions of their films).
Sometimes the term is used as a marketing ploy. For example, Ridley Scott states on the director's commentary track of Alien that the original theatrical release was his "director's cut", and that the new version was released as a marketing ploy. Director Peter Bogdanovich, no stranger to director's cuts himself, cites Red River as another such example.
Another way that released director's cuts can be compromised is when directors were never allowed to shoot their vision in the first place, so that when the film is re-cut, they must make do with the footage that exists. Examples of this include Terry Zwigoff's Bad Santa, Brian Helgeland's Payback, and most notably the Richard Donner re-cut of Superman II. Donner completed about 75 per cent of the shooting of the sequel during the shooting of the first film but was fired from the project. His director's cut of the film includes, among other things, screen test footage of stars Christopher Reeve and Margot Kidder, footage used in the first film, and entire scenes that were shot by replacement director Richard Lester, which Donner disliked but which were required for story purposes.
On the other side, some critics (such as Roger Ebert) have approved of the use of the label in unsuccessful films that had been tampered with by studio executives, such as Sergio Leone's original cut of Once Upon a Time in America, and the moderately successful theatrical version of Daredevil, which were altered by studio interference for their theatrical release. Other well-received director's cuts include Ridley Scott's Kingdom of Heaven (with Empire magazine stating: "The added 45 minutes in the Director’s Cut are like pieces missing from a beautiful but incomplete puzzle"), or Sam Peckinpah's Pat Garrett and Billy the Kid, where the restored 115-minute cut is closer to the director's intent than the theatrical 105-minute cut (the actual director's cut was 122 minutes; it was never completed to Peckinpah's satisfaction, but was used as a guide for the restoration that was done after his death).
In some instances, such as Peter Weir's Picnic at Hanging Rock, Robert Wise's Star Trek: The Motion Picture, John Cassavetes's The Killing of a Chinese Bookie, Blake Edwards's Darling Lili and Francis Ford Coppola's The Godfather Coda, changes made to a director's cut resulted in a very similar runtime or a shorter, more compact cut. This generally happens when a distributor insists that a film be completed to meet a release date, but sometimes it is the result of removing scenes that the distributor insisted on inserting, as opposed to restoring scenes they insisted on cutting.
Extended cuts and special editions
(See Changes in Star Wars re-releases and E.T. the Extra-Terrestrial: The 20th Anniversary)
Separate to director's cuts are alternate cuts released as "special editions" or "extended cuts". These versions are often put together for home video for fans, and should not be confused with 'director's cuts'. For example, despite releasing extended versions of his The Lord of the Rings trilogy, Peter Jackson told IGN in 2019 that “the theatrical versions are the definitive versions, I regard the extended cuts as being a novelty for the fans that really want to see the extra material.”
James Cameron has shared similar sentiments regarding the special editions of his films, "What I put into theaters is the Director's Cut. Nothing was cut that I didn't want cut. All the extra scenes we've added back in are just a bonus for the fans." Similar statements were made by Ridley Scott for the 2003 'director's cut' of Alien.
Such alternate versions sometimes include changes to the special effects in addition to different editing, such as George Lucas's Star Wars films, and Steven Spielberg's E.T. the Extra-Terrestrial.
Extended or special editions can also apply to films that have been extended or cut for television in order to fill time slots and accommodate long advertisement breaks, against the explicit wishes of the director, such as the TV versions of Dune (1984), The Warriors (1979), Superman (1978) and the Harry Potter films.
Examples of alternate cuts
The Lord of the Rings film series directed by Peter Jackson saw an "Extended Edition" release for each of the three films The Fellowship of the Ring (2001), The Two Towers (2002), and The Return of the King (2003), featuring an additional 30 minutes, 47 minutes and 51 minutes respectively of new scenes, special effects and music, alongside fan-club credits. These versions of the films were not Jackson's preferred edits; they were simply extended versions for fans to enjoy at home.
Batman v Superman: Dawn of Justice directed by Zack Snyder had an "Ultimate Edition," which added back 31 minutes of footage cut for the theatrical release and received an R rating, released digitally on 28 June 2016, and on Blu-ray on 19 July 2016.
The film Justice League, which suffered a very troubled production, was begun by Snyder, who completed a pre-post-production director's cut but had to step down before completing the project due to his daughter's death. Joss Whedon was hired by the film's distributor Warner Bros. Pictures to complete the film, which was heavily re-shot and re-edited and released in 2017 with Snyder retaining the directorial credit; it received a negative reception from general audiences, fans and critics alike and was a box office failure. Following a global fan campaign supported by the director and members of the cast and crew, Snyder was allowed to return and complete the project the way he intended, and a four-hour version of the film dubbed Zack Snyder's Justice League, with some additionally shot scenes at the end, was released on March 18, 2021, on HBO Max to more favorable reviews than the original version. Snyder originally teased a 214-minute cut of the film that was supposed to be the theatrical version released in 2017 had he not stepped down from the project.
Snyder has also confirmed that his Netflix-distributed sci-fi film Rebel Moon – Part One: A Child of Fire (2023) and its sequel Rebel Moon – Part Two: The Scargiver (2024) would receive R-rated director's cuts under the new titles Rebel Moon – Chapter One: Chalice of Blood and Rebel Moon – Chapter Two: Curse of Forgiveness (both 2024). The PG-13 initial versions of those films had been critically panned.
The film Caligula exists in at least 10 different officially released versions, ranging from a sub-90-minute television edit rated TV-14 (later TV-MA) for cable television to an unrated full pornographic version exceeding 3.5 hours. This is believed to be the largest number of distinct versions of a single film. Among major studio films, the record is believed to be held by Blade Runner; the magazine Video Watchdog counted no fewer than seven distinct versions in a 1993 issue, before director Ridley Scott released a "Final Cut" in 2007 to acclaim from critics, including Roger Ebert, who included it on his great movies list. The release of Blade Runner: The Final Cut brings the supposed grand total to eight differing versions of Blade Runner.
Upon its release on DVD and Blu-ray in 2019, Fantastic Beasts: The Crimes of Grindelwald featured an extended cut with seven minutes of additional footage. This is the first time since Harry Potter and the Chamber of Secrets that a Wizarding World film has had one.
An animated example of an extended cut without the approval of the director was 1983's Twice Upon a Time, which was extended to have more profanity (supervised by co-writer and producer Bill Couturié) as opposed to co-director John Korty's original.
The Coen Brothers' Blood Simple is one of few examples that demonstrate director's cuts are not necessarily longer.
Music videos
The music video for the 2006 Academy Award-nominated song "Listen", performed by Beyoncé, received a director's cut by Diane Martel. This version of the video was later included on Knowles' B'Day Anthology Video Album (2007). Linkin Park has a director's cut version of their music video "Faint" (directed by Mark Romanek), in which one of the band members spray-paints the words "En Proceso" on a wall, and Hoobastank has one for 2004's "The Reason", which omits the woman getting hit by the car. Britney Spears' music video for 2007's "Gimme More" was first released as a director's cut on iTunes, with the official video released three days later. Many other director's cut music videos contain sexual content that cannot be shown on TV, resulting in alternative scenes, such as Thirty Seconds to Mars's "Hurricane", and in some cases alternative videos, as with Spears' 2008 video for "Womanizer".
Expanded usage in pop culture
As the trend became more widely recognized, the term director's cut became increasingly used as a colloquialism to refer to an expanded version of other things, including video games, music, and comic books. This confusing usage only served to further reduce the artistic value of a director's cut, and it is currently rarely used in those ways.
Video games
For video games, these expanded versions, also referred to as "complete editions", will have additions to the gameplay or additional game modes and features outside the main portion of the game.
As is the case with certain high-profile Japanese-produced games, the game designers may take the liberty to revise their product for the overseas market with additional features during the localization process. These features are later added back to the native market in a re-release of a game in what is often referred to as the international version of the game. This was the case with the overseas versions of Final Fantasy VII, Metal Gear Solid and Rogue Galaxy, which contained additional features (such as new difficulty settings for Metal Gear Solid), resulting in re-released versions of those respective games in Japan (Final Fantasy VII International, Metal Gear Solid: Integral and Rogue Galaxy: Director's Cut). In the case of Metal Gear Solid 2: Sons of Liberty and Metal Gear Solid 3: Snake Eater, the American versions were released first, followed by the Japanese versions and then the European versions, with each regional release offering new content not found in the previous one. All of the added content from the Japanese and European versions of those games was included in the expanded editions titled Metal Gear Solid 2: Substance and Metal Gear Solid 3: Subsistence.
They also, similar to movies, will occasionally include extra, uncensored or alternate versions of cutscenes, as was the case with Resident Evil: Code Veronica X. In markets with strict censorship, a later relaxing of those laws will occasionally result in the game being rereleased with the "Special/Uncut Edition" tag added to differentiate between the originally released censored version and the current uncensored edition.
Several of the Pokémon games have also received director's cuts and have used the term "extension", though "remake" and "third version" are also often used by many fans. These include Pocket Monsters: Blue (Japan only), Pokémon Yellow (for Pokémon Red and Green/Blue), Pokémon Crystal (for Pokémon Gold and Silver), Pokémon Emerald (for Pokémon Ruby and Sapphire), Pokémon Platinum (for Pokémon Diamond and Pearl) and Pokémon Ultra Sun and Ultra Moon.
The PlayStation 5 "Director's Cut" releases of the PlayStation 4 games Ghost of Tsushima and Death Stranding both added expanded features to the original games.
Music
"Director's cuts" in music are rarely released. A few exceptions include Guided by Voices' 1994 album Bee Thousand, which was re-released as a three disc vinyl LP director's cut in 2004, and Fall Out Boy's 2003 album Take This to Your Grave, which was re-released as a Director's cut in 2005 with two extra tracks.
In 2011 British singer Kate Bush released the album titled Director's Cut. It is made up of songs from her earlier albums The Sensual World and The Red Shoes which have been remixed and restructured, three of which were re-recorded completely.
See also
Artistic integrity
Cinephilia
The Criterion Collection
Fan edit
Film modification
References
External links
Movie-Censorship – detailed cuts comparisons
Director's Cuts: Do They Make the Cut? – Anthony Leong
If Movie Seems too Long, Blame It on the Director – Chris Hicks
The Rise and Falling Rise of the Director’s Cut – Gary D. Rhodes
Versions of works
Film and video terminology
Film post-production
Film production
Video game terminology
Video game development | Director's cut | [
"Technology"
] | 3,994 | [
"Computing terminology",
"Video game terminology"
] |
8,742 | https://en.wikipedia.org/wiki/Dublin%20Core | The Dublin Core vocabulary, also known as the Dublin Core Metadata Terms (DCMT), is a general purpose metadata vocabulary for describing resources of any type. It was first developed for describing web content in the early days of the World Wide Web. The Dublin Core Metadata Initiative (DCMI) is responsible for maintaining the Dublin Core vocabulary.
Initially developed as fifteen terms in 1998, the set of elements has grown over time and in 2008 was redefined as a Resource Description Framework (RDF) vocabulary.
Designed with minimal constraints, each Dublin Core element is optional and may be repeated. There is no prescribed order in Dublin Core for presenting or using the elements.
Milestones
1995 - An invitational meeting hosted by the Ohio College Library Center (OCLC) and the National Center for Supercomputing Applications (NCSA) takes place in Dublin, Ohio, the headquarters of OCLC.
1998, September - RFC 2413 "Dublin Core Metadata for Resource Discovery" details the original 15-element vocabulary.
2000 - Issuance of Qualified Dublin Core.
2001 - Publication of the Dublin Core Metadata Element Set as ANSI/NISO Z39.85.
2008 - Publication of Dublin Core Metadata Initiative Terms in RDF.
Evolution of the Dublin Core vocabulary
The Dublin Core Element Set was a response to concern about accurate finding of resources on the Web, with some early assumptions that this would be a library function. In particular it anticipated a future in which scholarly materials would be searchable on the World Wide Web. Whereas HTML was being used to mark-up the structure of documents, metadata was needed to mark-up the contents of documents. Given the great number of documents on, and soon to be on, the World Wide Web, it was proposed that "self-identifying" documents would be necessary.
To this end, the Dublin Core Metadata Workshop met beginning in 1995 to develop a vocabulary that could be used to insert consistent metadata into Web documents. Originally defined as 15 metadata elements, the Dublin Core Element Set allowed authors of web pages a vocabulary and method for creating simple metadata for their works. It provided a simple, flat element set that could be used to create basic descriptions of web resources.
Qualified Dublin Core was developed in the late 1990s to provide an extension mechanism to the vocabulary of 15 elements. This was a response to communities whose metadata needs required additional detail.
In 2012, the DCMI Metadata Terms was created using an RDF data model. This expanded element set incorporates the original 15 elements and many of the qualifiers of the qualified Dublin Core as RDF properties. The full set of elements is found under the namespace http://purl.org/dc/terms/. There is a separate namespace for the original 15 elements as previously defined: http://purl.org/dc/elements/1.1/.
Dublin Core Metadata Element Set, 1995
The Dublin Core vocabulary published in 1999 consisted of 15 terms: Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights.
The vocabulary was commonly expressed in HTML 'meta' tagging in the "<head>" section of an HTML-encoded page.
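As a minimal, hypothetical sketch of that practice (the resource described, its values, and the schema link shown here are illustrative only, not a normative profile), a page's head section might carry Dublin Core metadata like this:
<head>
  <title>Example Report</title>
  <!-- Declare the prefix "DC" for the original 15-element namespace -->
  <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />
  <!-- Each optional, repeatable element becomes one meta tag -->
  <meta name="DC.title" content="Example Report" />
  <meta name="DC.creator" content="Jane Doe" />
  <meta name="DC.date" content="1999-07-02" />
  <meta name="DC.language" content="en" />
</head>
Because every element is optional and repeatable, a second DC.creator meta tag could simply be added alongside the first.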
The vocabulary could be used in any metadata serialization including key/value pairs and XML.
Qualified Dublin Core, 2000
Subsequent to the specification of the original 15 elements, Qualified Dublin Core was developed to provide an extension mechanism to be used when the primary 15 terms were not sufficient. A set of common refinements and encoding schemes was provided in the documentation; these schemes include controlled vocabularies and formal notations or parsing rules. Qualified Dublin Core was not limited to these specific refinements, allowing communities to create extended metadata terms to meet their needs.
The guiding principle for the qualification of Dublin Core elements, colloquially known as the Dumb-Down Principle, states that an application that does not understand a specific element refinement term should be able to ignore the qualifier and treat the metadata value as if it were an unqualified (broader) element. While this may result in some loss of specificity, the remaining element value (without the qualifier) should continue to be generally correct and useful for discovery.
Qualified Dublin Core added qualifiers to these elements:
And added three elements not in the base 15:
Audience
Provenance
RightsHolder
Qualified Dublin Core is often used with a "dot syntax", with a period separating the element and the qualifier(s). This is shown in this excerpted example provided by Chan and Hodges:
Title: D-Lib Magazine
Title.alternative: Digital Library Magazine
Identifier.ISSN: 1082-9873
Publisher: Corporation for National Research Initiatives
Publisher.place: Reston, VA.
Subject.topical.LCSH: Digital libraries - Periodicals
DCMI Metadata Terms, 2008
The DCMI Metadata Terms document lists the current set of the Dublin Core vocabulary. This set includes the fifteen terms of the DCMES, as well as many of the qualified terms. Each term has a unique URI in the namespace http://purl.org/dc/terms/, and all are defined as RDF properties.
It also includes these RDF classes which are used as domains and ranges of some properties:
Maintenance of the standard
Changes that are made to the Dublin Core standard are reviewed by a DCMI Usage Board within the context of a DCMI Namespace Policy. This policy describes how terms are assigned and also sets limits on the amount of editorial changes allowed to the labels, definitions, and usage comments.
Dublin Core as standards
The Dublin Core Metadata Terms vocabulary has been formally standardized internationally as ISO 15836 by the International Organization for Standardization (ISO) and as IETF RFC 5013 by the Internet Engineering Task Force (IETF), as well as in the U.S. as ANSI/NISO Z39.85 by the National Information Standards Organization (NISO).
Syntax
Syntax choices for metadata expressed with the Dublin Core elements depend on context. Dublin Core concepts and semantics are designed to be syntax independent and apply to a variety of contexts, as long as the metadata is in a form suitable for interpretation by both machines and people.
Notable applications
One Document Type Definition based on Dublin Core is the Open Source Metadata Framework (OMF) specification. OMF is in turn used by Rarian (superseding ScrollKeeper), which is used by the GNOME desktop and KDE help browsers and the ScrollServer documentation server.
PBCore is also based on Dublin Core. The Zope CMF's Metadata products, used by the Plone, ERP5, the Nuxeo CPS Content management systems, SimpleDL, and Fedora Commons also implement Dublin Core. The EPUB e-book format uses Dublin Core metadata in the OPF file. Qualified Dublin Core is used in the DSpace archival management software.
The Australian Government Locator Service (AGLS) metadata standard is an application profile of Dublin Core.
See also
Metadata registry
Metadata Object Description Schema
Ontology (information science)
Open Archives Initiative (OAI)
Controlled vocabulary
Interoperability
Darwin Core, a Dublin Core extension for biodiversity informatics
References
External links
Dublin Core Metadata Initiative Publishes DCMI Abstract Model (Cover Pages, March 2005)
Dublin Core Generator A JavaScript/JQuery tool for working with Dublin core metadata code
Metadata Object Description Schema (MODS)
Archival science
Bibliography file formats
Information management
Interoperability
ISO standards
Knowledge representation
Library cataloging and classification
Metadata standards
Museology
Records management
Reference models
Semantic Web | Dublin Core | [
"Technology",
"Engineering"
] | 1,487 | [
"Information systems",
"Telecommunications engineering",
"Interoperability",
"Information management"
] |
8,743 | https://en.wikipedia.org/wiki/Document%20Object%20Model | The Document Object Model (DOM) is a cross-platform and language-independent interface that treats an HTML or XML document as a tree structure wherein each node is an object representing a part of the document. The DOM represents a document with a logical tree. Each branch of the tree ends in a node, and each node contains objects. DOM methods allow programmatic access to the tree; with them one can change the structure, style or content of a document. Nodes can have event handlers (also known as event listeners) attached to them. Once an event is triggered, the event handlers get executed.
The principal standardization of the DOM was handled by the World Wide Web Consortium (W3C), which last developed a recommendation in 2004. WHATWG took over the development of the standard, publishing it as a living document. The W3C now publishes stable snapshots of the WHATWG standard.
In HTML DOM (Document Object Model), every element is a node:
A document is a document node.
All HTML elements are element nodes.
All HTML attributes are attribute nodes.
Text inserted into HTML elements is represented by text nodes.
Comments are comment nodes.
History
The history of the Document Object Model is intertwined with the history of the "browser wars" of the late 1990s between Netscape Navigator and Microsoft Internet Explorer, as well as with that of JavaScript and JScript, the first scripting languages to be widely implemented in the JavaScript engines of web browsers.
JavaScript was released by Netscape Communications in 1995 within Netscape Navigator 2.0. Netscape's competitor, Microsoft, released Internet Explorer 3.0 the following year with a reimplementation of JavaScript called JScript. JavaScript and JScript let web developers create web pages with client-side interactivity. The limited facilities for detecting user-generated events and modifying the HTML document in the first generation of these languages eventually became known as "DOM Level 0" or "Legacy DOM." No independent standard was developed for DOM Level 0, but it was partly described in the specifications for HTML 4.
Legacy DOM was limited in the kinds of elements that could be accessed. Form, link and image elements could be referenced with a hierarchical name that began with the root document object. A hierarchical name could make use of either the names or the sequential index of the traversed elements. For example, a form input element could be accessed as either document.myForm.myInput or document.forms[0].elements[0].
The Legacy DOM enabled client-side form validation and simple interface interactivity like creating tooltips.
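As a rough sketch of that Legacy DOM style (the form name, field name and validation rule below are invented for illustration, not taken from any particular specification), a page could validate a form before submission like this:
<html>
  <body>
    <!-- Reachable as document.myForm or document.forms[0] -->
    <form name="myForm" action="/submit" onsubmit="return checkForm();">
      <input type="text" name="myInput" value="" />
      <input type="submit" value="Send" />
    </form>
    <script>
      // Legacy DOM access: hierarchical names or sequential indexes
      function checkForm() {
        var field = document.myForm.myInput; // same node as document.forms[0].elements[0]
        if (field.value === "") {
          alert("Please fill in the field."); // simple client-side validation
          return false;                       // cancel the submission
        }
        return true;
      }
    </script>
  </body>
</html>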
In 1997, Netscape and Microsoft released version 4.0 of Netscape Navigator and Internet Explorer respectively, adding support for Dynamic HTML (DHTML) functionality enabling changes to a loaded HTML document. DHTML required extensions to the rudimentary document object that was available in the Legacy DOM implementations. Although the Legacy DOM implementations were largely compatible since JScript was based on JavaScript, the DHTML DOM extensions were developed in parallel by each browser maker and remained incompatible. These versions of the DOM became known as the "Intermediate DOM".
After the standardization of ECMAScript, the W3C DOM Working Group began drafting a standard DOM specification. The completed specification, known as "DOM Level 1", became a W3C Recommendation in late 1998. By 2005, large parts of W3C DOM were well-supported by common ECMAScript-enabled browsers, including Internet Explorer 6 (from 2001), Opera, Safari and Gecko-based browsers (like Mozilla, Firefox, SeaMonkey and Camino).
Standards
The W3C DOM Working Group published its final recommendation and subsequently disbanded in 2004. Development efforts migrated to the WHATWG, which continues to maintain a living standard. In 2009, the Web Applications group reorganized DOM activities at the W3C. In 2013, due to a lack of progress and the impending release of HTML5, the DOM Level 4 specification was reassigned to the HTML Working Group to expedite its completion. Meanwhile, in 2015, the Web Applications group was disbanded and DOM stewardship passed to the Web Platform group. Beginning with the publication of DOM Level 4 in 2015, the W3C creates new recommendations based on snapshots of the WHATWG standard.
DOM Level 1 provided a complete model for an entire HTML or XML document, including the means to change any portion of the document.
DOM Level 2 was published in late 2000. It introduced the getElementById function as well as an event model and support for XML namespaces and CSS.
DOM Level 3, published in April 2004, added support for XPath and keyboard event handling, as well as an interface for serializing documents as XML.
HTML5 was published in October 2014. Part of HTML5 replaced the DOM Level 2 HTML module.
DOM Level 4 was published in 2015 and retired in November 2020.
DOM 2020-06 was published in September 2021 as a W3C Recommendation. It is a snapshot of the WHATWG living standard.
Applications
Web browsers
To render a document such as an HTML page, most web browsers use an internal model similar to the DOM. The nodes of every document are organized in a tree structure, called the DOM tree, with the topmost node named as "Document object". When an HTML page is rendered in browsers, the browser downloads the HTML into local memory and automatically parses it to display the page on screen. However, the DOM does not necessarily need to be represented as a tree, and some browsers have used other internal models.
JavaScript
When a web page is loaded, the browser creates a Document Object Model of the page, which is an object-oriented representation of an HTML document that acts as an interface between JavaScript and the document itself. This allows the creation of dynamic web pages (a combined example is sketched after the list below), because within a page JavaScript can:
add, change, and remove any of the HTML elements and attributes
change any of the CSS styles
react to all the existing events
create new events
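The following minimal sketch combines these operations in one hypothetical page; the element ids, style value and handler shown are invented purely for illustration:
<html>
  <body>
    <ul id="list">
      <li id="first">First item</li>
    </ul>
    <script>
      // Add a new HTML element with an attribute and text
      var item = document.createElement("li");
      item.setAttribute("class", "added");
      item.appendChild(document.createTextNode("Second item"));
      document.getElementById("list").appendChild(item);

      // Change a CSS style on the new element
      item.style.color = "green";

      // React to an existing event: remove the first item when the new one is clicked
      item.addEventListener("click", function () {
        var first = document.getElementById("first");
        if (first) {
          first.parentNode.removeChild(first);
        }
      });
    </script>
  </body>
</html>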
DOM tree structure
A Document Object Model (DOM) tree is a hierarchical representation of an HTML or XML document. It consists of a root node, which is the document itself, and a series of child nodes that represent the elements, attributes, and text content of the document. Each node in the tree has a parent node, except for the root node, and can have multiple child nodes.
Elements as nodes
Elements in an HTML or XML document are represented as nodes in the DOM tree. Each element node has a tag name and attributes, and can contain other element nodes or text nodes as children. For example, an HTML document with the following structure:
<html>
<head>
<title>My Website</title>
</head>
<body>
<h1>Welcome to DOM</h1>
<p>This is my website.</p>
</body>
</html>
will be represented in the DOM tree as:
- Document (root)
- html
- head
- title
- "My Website"
- body
- h1
- "Welcome"
- p
- "This is my website."
Text nodes
Text content within an element is represented as a text node in the DOM tree. Text nodes do not have attributes or child nodes, and are always leaf nodes in the tree. For example, the text content "My Website" in the title element and "Welcome" in the h1 element in the above example are both represented as text nodes.
Attributes as properties
Attributes of an element are represented as properties of the element node in the DOM tree. For example, an element with the following HTML:
<a href="https://example.com">Link</a>
will be represented in the DOM tree as:
- a
- href: "https://example.com"
- "Link"
Manipulating the DOM tree
The DOM tree can be manipulated using JavaScript or other programming languages. Common tasks include navigating the tree, adding, removing, and modifying nodes, and getting and setting the properties of nodes. The DOM API provides a set of methods and properties to perform these operations, such as getElementById, createElement, appendChild, and innerHTML.
// Create the root element
var root = document.createElement("root");
// Create a child element
var child = document.createElement("child");
// Add the child element to the root element
root.appendChild(child);
Another way to create a DOM structure is to use the innerHTML property to insert HTML code as a string, creating the elements and children in the process. For example:
document.getElementById("root").innerHTML = "<child></child>";
Another method is to use a JavaScript library or framework such as jQuery, AngularJS, React, Vue.js, etc. These libraries provide a more convenient, eloquent and efficient way to create, manipulate and interact with the DOM.
It is also possible to create a DOM structure from XML or JSON data, using JavaScript methods to parse the data and create the nodes accordingly.
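As a hedged sketch of that idea (the data, element names and container id below are invented for the example), a page could build DOM nodes from an XML string via the standard DOMParser API and from parsed JSON:
<html>
  <body>
    <div id="container"></div>
    <script>
      // Build DOM nodes from an XML string using DOMParser
      var xmlDoc = new DOMParser().parseFromString(
        "<items><item>Alpha</item><item>Beta</item></items>",
        "application/xml"
      );
      var container = document.getElementById("container");
      var items = xmlDoc.getElementsByTagName("item");
      for (var i = 0; i < items.length; i++) {
        var p = document.createElement("p");
        p.appendChild(document.createTextNode(items[i].textContent));
        container.appendChild(p);
      }

      // Build DOM nodes from JSON data
      var data = JSON.parse('["Gamma", "Delta"]');
      data.forEach(function (label) {
        var p = document.createElement("p");
        p.appendChild(document.createTextNode(label));
        container.appendChild(p);
      });
    </script>
  </body>
</html>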
Creating a DOM structure does not necessarily mean that it will be displayed in the web page; it only exists in memory and must be appended to the document body or to a specific container to be rendered.
In summary, creating a DOM structure involves creating individual nodes and organizing them in a hierarchical structure using JavaScript or other programming languages, and it can be done using several methods depending on the use case and the developer's preference.
Implementations
Because the DOM supports navigation in any direction (e.g., parent and previous sibling) and allows for arbitrary modifications, implementations typically buffer the document. However, a DOM need not originate in a serialized document at all, but can be created in place with the DOM API. And even before the idea of the DOM originated, there were implementations of equivalent structure with persistent disk representation and rapid access, for example DynaText's model, as well as various database approaches.
Layout engines
Web browsers rely on layout engines to parse HTML into a DOM. Some layout engines, such as Trident/MSHTML, are associated primarily or exclusively with a particular browser, such as Internet Explorer. Others, including Blink, WebKit, and Gecko, are shared by a number of browsers, such as Google Chrome, Opera, Safari, and Firefox. The different layout engines implement the DOM standards to varying degrees of compliance.
Libraries
DOM implementations:
libxml2
MSXML
Xerces is a collection of DOM implementations written in C++, Java and Perl
xml.dom for Python
XML for <SCRIPT> is a JavaScript-based DOM implementation
PHP.Gt DOM is a server-side DOM implementation based on libxml2 and brings DOM level 4 compatibility to the PHP programming language
Domino is a Server-side (Node.js) DOM implementation based on Mozilla's dom.js. Domino is used in the MediaWiki stack with Visual Editor.
SimpleHtmlDom is a simple HTML document object model in C#, which can generate HTML strings programmatically.
APIs that expose DOM implementations:
JAXP (Java API for XML Processing) is an API for accessing DOM providers
Lazarus (Free Pascal IDE) contains two variants of the DOM - with UTF-8 and ANSI format
Inspection tools:
DOM Inspector is a web developer tool
See also
Shadow DOM
Virtual DOM
References
General references
External links
DOM Living Standard by the WHATWG
Original W3C DOM hub by the W3C DOM Working Group (outdated)
Latest snapshots of the WHATWG living standard published by the W3C HTML Working Group
Web Platform Working Group (current steward of W3C DOM)
Application programming interfaces
HTML
Object models
World Wide Web Consortium standards
XML-based standards | Document Object Model | [
"Technology"
] | 2,452 | [
"Computer standards",
"XML-based standards"
] |
8,745 | https://en.wikipedia.org/wiki/Design%20pattern | A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, particularly software engineering.
Details
An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with.
Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from "conflicting forces"—such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern.
Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time".
For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both within the context for the pattern "A PLACE TO WAIT".
Examples
Software design pattern, in software design
Architectural pattern, for software architecture
Interaction design pattern, used in interaction design / human–computer interaction
Pedagogical patterns, in teaching
Pattern gardening, in gardening
Business models also have design patterns.
See also
Style guide
Design paradigm
Anti-pattern
Dark pattern
References
Further reading
"Engineering"
] | 506 | [
"Design patterns",
"Design"
] |
8,748 | https://en.wikipedia.org/wiki/N%2CN-Dimethyltryptamine | N,N-Dimethyltryptamine (DMT or N,N-DMT) is a substituted tryptamine that occurs in many plants and animals, including humans, and which is both a derivative and a structural analog of tryptamine. DMT is used as a psychedelic drug and prepared by various cultures for ritual purposes as an entheogen.
DMT has a rapid onset, intense effects, and a relatively short duration of action. For those reasons, DMT was known as the "businessman's trip" during the 1960s in the United States, as a user could access the full depth of a psychedelic experience in considerably less time than with other substances such as LSD or psilocybin mushrooms. DMT can be inhaled, ingested, or injected and its effects depend on the dose, as well as the mode of administration. When inhaled or injected, the effects last about five to fifteen minutes. Effects can last three hours or more when orally ingested along with a monoamine oxidase inhibitor (MAOI), such as the ayahuasca brew of many native Amazonian tribes. DMT can produce vivid "projections" of mystical experiences involving euphoria and dynamic pseudohallucinations of geometric forms.
DMT is a functional analog and structural analog of other psychedelic tryptamines such as O-acetylpsilocin (4-AcO-DMT), psilocybin (4-PO-DMT), psilocin (4-HO-DMT), NB-DMT, O-methylbufotenin (5-MeO-DMT), and bufotenin (5-HO-DMT). Parts of the structure of DMT occur within some important biomolecules like serotonin and melatonin, making them structural analogs of DMT.
Human consumption
DMT is produced in many species of plants often in conjunction with its close chemical relatives 5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT) and bufotenin (5-OH-DMT). DMT-containing plants are commonly used in indigenous Amazonian shamanic practices. It is usually one of the main active constituents of the drink ayahuasca; however, ayahuasca is sometimes brewed with plants that do not produce DMT. It occurs as the primary psychoactive alkaloid in several plants including Mimosa tenuiflora, Diplopterys cabrerana, and Psychotria viridis. DMT is found as a minor alkaloid in snuff made from Virola bark resin in which 5-MeO-DMT is the main active alkaloid. DMT is also found as a minor alkaloid in bark, pods, and beans of Anadenanthera peregrina and Anadenanthera colubrina used to make Yopo and Vilca snuff, in which bufotenin is the main active alkaloid. Psilocin and psilocybin, the main psychoactive compounds in psilocybin mushrooms, are structurally similar to DMT.
The psychotropic effects of DMT were first studied scientifically by the Hungarian chemist and psychologist Stephen Szára, who performed research with volunteers in the mid-1950s. Szára, who later worked for the United States National Institutes of Health, had turned his attention to DMT after his order for LSD from the Swiss company Sandoz Laboratories was rejected on the grounds that the powerful psychotropic could be dangerous in the hands of a communist country.
DMT is generally not active orally unless it is combined with a monoamine oxidase inhibitor such as a reversible inhibitor of monoamine oxidase A (RIMA), for example, harmaline. Without a MAOI, the body quickly metabolizes orally administered DMT, and it therefore has no hallucinogenic effect unless the dose exceeds the body's monoamine oxidase's metabolic capacity. Other means of consumption such as vaporizing, injecting, or insufflating the drug can produce powerful hallucinations for a short time (usually less than half an hour), as the DMT reaches the brain before it can be metabolized by the body's natural monoamine oxidase. Taking an MAOI prior to vaporizing or injecting DMT prolongs and enhances the effects.
Clinical use research
Existing research on clinical use of DMT mostly focuses on its effects when exogenously administered as a drug. Although the scientific consensus is that DMT is a naturally occurring molecule in humans, the effects of endogenous DMT in humans (and more broadly in mammals) is still not well understood.
Dimethyltryptamine (DMT), an endogenous ligand of sigma-1 receptors (Sig-1Rs), acts against systemic hypoxia. Research demonstrates DMT reduces the number of apoptotic and ferroptotic cells in mammalian forebrain and supports astrocyte survival in an ischemic environment. According to these data, DMT may be considered as adjuvant pharmacological therapy in the management of acute cerebral ischemia.
DMT is studied as a potential treatment for Parkinson's disease in a Phase 1/2 clinical trial.
SPL026 (DMT fumarate) is currently undergoing phase II clinical trials investigating its use alongside supportive psychotherapy as a potential treatment for major depressive disorder. Additionally, a safety study is underway to investigate the effects of combining SSRIs with SPL026.
Neuropharmacology
Recently, researchers discovered that N,N-dimethyltryptamine is a potent psychoplastogen, a compound capable of promoting rapid and sustained neuroplasticity that may have wide-ranging therapeutic benefit.
Quantities of dimethyltryptamine and O-methylbufotenin were found present in the cerebrospinal fluid of humans in a psychiatric study.
Effects
Subjective psychedelic experiences
Subjective experiences of DMT includes profound time-dilatory, visual, auditory, tactile, and proprioceptive distortions and hallucinations, and other experiences that, by most firsthand accounts, defy verbal or visual description. Examples include perceiving hyperbolic geometry or seeing Escher-like impossible objects.
Several scientific experimental studies have tried to measure subjective experiences of altered states of consciousness induced by drugs under highly controlled and safe conditions.
Rick Strassman and his colleagues conducted a five-year-long DMT study at the University of New Mexico in the 1990s. The results provided insight about the quality of subjective psychedelic experiences. In this study participants received the DMT dosage via intravenous injection and the findings suggested that different psychedelic experiences can occur, depending on the level of dosage. Lower doses (0.01 and 0.05 mg/kg) produced some aesthetic and emotional responses, but not hallucinogenic experiences (e.g., 0.05 mg/kg had mild mood elevating and calming properties). In contrast, responses produced by higher doses (0.2 and 0.4 mg/kg) were labeled by researchers as "hallucinogenic", eliciting an "intensely colored, rapidly moving display of visual images, formed, abstract or both". Compared to other sensory modalities, the visual was the most affected. Participants reported visual hallucinations, fewer auditory hallucinations and specific physical sensations progressing to a sense of bodily dissociation, as well as experiences of euphoria, calm, fear, and anxiety. These dose-dependent effects match well with anonymously posted "trip reports" online, where users report "breakthroughs" above certain doses.
Strassman also highlighted the importance of the context in which the drug is taken. He claimed that DMT has no beneficial effects of itself; rather, the context of when and where people take it plays an important role.
It appears that DMT can induce a state or feeling wherein the person believes that they "communicate with other intelligent lifeforms" (see "machine elves"). High doses of DMT produce a state that involves a sense of "another intelligence" that people sometimes describe as "super-intelligent", but "emotionally detached".
A 1995 study by Adolf Dittrich and Daniel Lamparter found that the DMT-induced altered state of consciousness (ASC) is strongly influenced by habitual rather than situative factors. In the study, researchers used three dimensions of the APZ questionnaire to examine ASC. The first dimension, oceanic boundlessness (OB), refers to dissolution of ego boundaries and is mostly associated with positive emotions. The second dimension, anxious ego-dissolution (AED), represents a disordering of thoughts and decreases in autonomy and self-control. Last, visionary restructuralization (VR) refers to auditory/visual illusions and hallucinations. Results showed strong effects within the first and third dimensions for all conditions, especially with DMT, and suggested strong intrastability of elicited reactions independently of the condition for the OB and VR scales.
Reported encounters with external entities
Entities perceived during DMT inebriation have been represented in diverse forms of psychedelic art. The term machine elf was coined by ethnobotanist Terence McKenna for the entities he encountered in DMT "hyperspace", also using terms like fractal elves, or self-transforming machine elves. McKenna first encountered the "machine elves" after smoking DMT in Berkeley in 1965. His subsequent speculations regarding the hyperdimensional space in which they were encountered have inspired a great many artists and musicians, and the meaning of DMT entities has been a subject of considerable debate among participants in a networked cultural underground, enthused by McKenna's effusive accounts of DMT hyperspace. Cliff Pickover has also written about the "machine elf" experience, in the book Sex, Drugs, Einstein, & Elves. Strassman noted similarities between self-reports of his DMT study participants' encounters with these "entities", and mythological descriptions of figures such as Ḥayyot haq-Qodesh in ancient religions, including both angels and demons. Strassman also argues for a similarity in his study participants' descriptions of mechanized wheels, gears and machinery in these encounters, with those described in visions of encounters with the Living Creatures and Ophanim of the Hebrew Bible, noting they may stem from a common neuropsychopharmacological experience.
Strassman argues that the more positive of the "external entities" encountered in DMT experiences should be understood as analogous to certain forms of angels:
Strassman's experimental participants also note that some other entities can subjectively resemble creatures more like insects and aliens. As a result, Strassman writes these experiences among his experimental participants "also left me feeling confused and concerned about where the spirit molecule was leading us. It was at this point that I began to wonder if I was getting in over my head with this research."
Hallucinations of strange creatures had been reported by Stephen Szara in a 1958 study in psychotic patients, in which he described how one of his subjects under the influence of DMT had experienced "strange creatures, dwarves or something" at the beginning of a DMT trip.
Other researchers of the entities seemingly encountered by DMT users describe them as "entities" or "beings" in humanoid as well as animal form, with descriptions of "little people" being common (non-human gnomes, elves, imps, etc.). Strassman and others have speculated that this form of hallucination may be the cause of alien abduction and extraterrestrial encounter experiences, which may occur through endogenously-occurring DMT.
Likening them to descriptions of rattling and chattering auditory phenomena described in encounters with the Hayyoth in the Book of Ezekiel, Rick Strassman notes that participants in his studies, when reporting encounters with the alleged entities, have also described loud auditory hallucinations, such as one subject reporting typically "the elves laughing or talking at high volume, chattering, twittering".
Near-death experience
A 2018 study found significant relationships between a DMT experience and a near-death experience (NDE). A 2019 large-scale study pointed that ketamine, Salvia divinorum, and DMT (and other classical psychedelic substances) may be linked to near-death experiences due to the semantic similarity of reports associated with the use of psychoactive compounds and NDE narratives, but the study concluded that with the current data it is neither possible to corroborate nor refute the hypothesis that the release of an endogenous ketamine-like neuroprotective agent underlies NDE phenomenology.
Physiological response
According to a dose-response study in human subjects, dimethyltryptamine administered intravenously slightly elevated blood pressure, heart rate, pupil diameter, and rectal temperature, in addition to elevating blood concentrations of beta-endorphin, corticotropin, cortisol, and prolactin; growth hormone blood levels rise equally in response to all doses of DMT, and melatonin levels were unaffected.
Conjecture regarding endogenous production and effects
In the 1950s, the endogenous production of psychoactive agents was considered to be a potential explanation for the hallucinatory symptoms of some psychiatric diseases; this is known as the transmethylation hypothesis. Several speculative and yet untested hypotheses suggest that endogenous DMT is produced in the human brain and is involved in certain psychological and neurological states. DMT is naturally occurring in small amounts in rat brains, human cerebrospinal fluid, and other tissues of humans and other mammals. Further, mRNA for the enzyme necessary for the production of DMT, INMT, are expressed in the human cerebral cortex, choroid plexus, and pineal gland, suggesting an endogenous role in the human brain. In 2011, Nicholas Cozzi of the University of Wisconsin School of Medicine and Public Health, and three other researchers, concluded that INMT, an enzyme that is associated with the biosynthesis of DMT and endogenous hallucinogens is present in the non-human primate (rhesus macaque) pineal gland, retinal ganglion neurons, and spinal cord. Neurobiologist Andrew Gallimore (2013) suggested that while DMT might not have a modern neural function, it may have been an ancestral neuromodulator once secreted in psychedelic concentrations during REM sleep, a function now lost.
Adverse effects
Acute adverse psychological reaction
DMT may trigger psychological reactions, known colloquially as a "bad trip", such as intense fear, paranoia, anxiety, panic attacks, and substance-induced psychosis, particularly in predisposed individuals.
Addiction and dependence liability
DMT, like other serotonergic psychedelics, is considered to be non-addictive with low abuse potential. A study examining substance use disorder for DSM-IV reported that almost no hallucinogens produced dependence, unlike psychoactive drugs of other classes such as stimulants and depressants. At present, there have been no studies that report drug withdrawal syndrome with termination of DMT, and dependence potential of DMT and the risk of sustained psychological disturbance may be minimal when used infrequently; however, the physiological dependence potential of DMT and ayahuasca has not yet been documented convincingly.
Tolerance
Unlike other classical psychedelics, tolerance does not seem to develop to the subjective effects of DMT. Studies report that DMT did not exhibit tolerance upon repeated administration in twice-daily sessions, separated by 5 hours, for 5 consecutive days; field reports suggest a refractory period of only 15 to 30 minutes, while plasma levels of DMT were nearly undetectable 30 minutes after intravenous administration. Another study of four closely spaced DMT infusion sessions with 30-minute intervals also suggests no tolerance buildup to the psychological effects of the compound, while heart rate responses and neuroendocrine effects were diminished with repeated administration. Similarly to DMT by itself, tolerance does not appear to develop to ayahuasca. A fully hallucinogenic dose of DMT did not demonstrate cross-tolerance in human subjects who are highly tolerant to LSD; research suggests that DMT exhibits unique pharmacological properties compared to other classical psychedelics.
Long-term use
There have been no serious adverse effects reported on long-term use of DMT, apart from acute cardiovascular events. Repeated and one-time administration of DMT produces marked changes in the cardiovascular system, with an increase in systolic and diastolic blood pressure; although the changes were not statistically significant, a robust trend towards significance was observed for systolic blood pressure at high doses.
Drug-interactions
DMT is inactive when ingested orally due to metabolism by MAO, and DMT-containing drinks such as ayahuasca have been found to contain MAOIs, in particular harmine and harmaline. Life-threatening reactions such as serotonin syndrome (SS) may occur when MAOIs are combined with certain serotonergic medications such as SSRI antidepressants. Serotonin syndrome has also been reported with tricyclic antidepressants, opiates, analgesics, and antimigraine drugs; caution is advised if an individual has recently used dextromethorphan (DXM), MDMA, ginseng, or St. John's wort.
Chronic use of SSRIs, TCAs, and MAOIs diminishes the subjective effects of psychedelics due to presumed SSRI-induced 5-HT2A receptor downregulation and MAOI-induced 5-HT2A receptor desensitization. The interactions between psychedelics and antipsychotics or anticonvulsants are not well documented; however, reports reveal that co-use of psychedelics with mood stabilizers such as lithium may provoke seizures and dissociative effects in individuals with bipolar disorder.
Routes of administration
Inhalation
A standard dose for vaporized DMT is 20–60 milligrams, depending highly on the efficiency of vaporization as well as body weight and personal variation. In general, this is inhaled in a few successive breaths, but lower doses can be used if the user can inhale it in fewer breaths (ideally one). The effects last for a short period of time, usually 5 to 15 minutes, dependent on the dose. The onset after inhalation is very fast (less than 45 seconds) and peak effects are reached within a minute. In the 1960s, DMT was known as a "businessman's trip" in the US because of the relatively short duration (and rapid onset) of action when inhaled. DMT can be inhaled using a bong, typically when sandwiched between layers of plant matter, using a specially designed pipe, or by using an e-cigarette once it has been dissolved in propylene glycol and/or vegetable glycerin. Some users have also started using vaporizers meant for cannabis extracts ("wax pens") for ease of temperature control when vaporizing crystals. A DMT-infused smoking blend is called Changa, and is typically used in pipes or other utensils meant for smoking dried plant matter.
Intravenous injection
In a study conducted from 1990 through 1995, University of New Mexico psychiatrist Rick Strassman found that some volunteers injected with high doses of DMT reported experiences with perceived alien entities. Usually, the reported entities were experienced as the inhabitants of a perceived independent reality that the subjects reported visiting while under the influence of DMT.
In 2023, a study investigated a novel method of DMT administration involving a bolus injection paired with a constant-rate infusion, with the goal of extending the DMT experience.
Oral
DMT is broken down by the enzyme monoamine oxidase through a process called deamination, and is quickly inactivated orally unless combined with a monoamine oxidase inhibitor (MAOI). The traditional South American beverage ayahuasca is derived by boiling Banisteriopsis caapi with leaves of one or more plants containing DMT, such as Psychotria viridis, Psychotria carthagenensis, or Diplopterys cabrerana. The Banisteriopsis caapi contains harmala alkaloids, a highly active reversible inhibitor of monoamine oxidase A (RIMAs), rendering the DMT orally active by protecting it from deamination. A variety of different recipes are used to make the brew depending on the purpose of the ayahuasca session, or local availability of ingredients. Two common sources of DMT in the western US are reed canary grass (Phalaris arundinacea) and Harding grass (Phalaris aquatica). These invasive grasses contain low levels of DMT and other alkaloids but also contain gramine, which is toxic and difficult to separate. In addition, Jurema (Mimosa tenuiflora) shows evidence of DMT content: the pink layer in the inner rootbark of this small tree contains a high concentration of N,N-DMT.
Taken orally with an RIMA, DMT produces a long-lasting (over three hours), slow, deep metaphysical experience similar to that of psilocybin mushrooms, but more intense.
The intensity of orally administered DMT depends on the type and dose of MAOI administered alongside it. When ingested with 120 mg of harmine (a RIMA and member of the harmala alkaloids), 20 mg of DMT was reported to have psychoactive effects by author and ethnobotanist Jonathan Ott. Ott reported that to produce a visionary state, the threshold oral dose was 30 mg DMT alongside 120 mg harmine. This is not necessarily indicative of a standard dose, as dose-dependent effects may vary due to individual variations in drug metabolism.
History
Naturally occurring substances (of both vegetable and animal origin) containing DMT have been used in South America since pre-Columbian times.
DMT was first synthesized in 1931 by Canadian chemist Richard Helmuth Fredrick Manske. In general, its discovery as a natural product is credited to Brazilian chemist and microbiologist Oswaldo Gonçalves de Lima, who isolated an alkaloid he named nigerina (nigerine) from the root bark of Mimosa tenuiflora in 1946. However, in a careful review of the case, Jonathan Ott shows that the empirical formula for nigerine determined by Gonçalves de Lima, which notably contains an atom of oxygen, can match only a partial, "impure" or "contaminated" form of DMT. It was only in 1959, when Gonçalves de Lima provided American chemists a sample of Mimosa tenuiflora roots, that DMT was unequivocally identified in this plant material. Less ambiguous is the case of isolation and formal identification of DMT in 1955 in seeds and pods of Anadenanthera peregrina by a team of American chemists led by Evan Horning (1916–1993). Since 1955, DMT has been found in a number of organisms: in at least fifty plant species belonging to ten families, and in at least four animal species, including one gorgonian and three mammalian species (including humans).
The hallucinogenic properties of DMT were not scientifically established until 1956, by Hungarian chemist and psychiatrist Stephen Szara. In his paper Dimethyltryptamin: Its Metabolism in Man; the Relation of its Psychotic Effect to the Serotonin Metabolism, Szara employed synthetic DMT, synthesized by the method of Speeter and Anthony, which was then administered to 20 volunteers by intramuscular injection. Urine samples were collected from these volunteers for the identification of DMT metabolites. This work is considered the converging link between the chemical structure of DMT and its cultural consumption as a psychoactive and religious sacrament.
Another historical milestone is the discovery of DMT in plants frequently used by Amazonian natives as an additive to the vine Banisteriopsis caapi to make ayahuasca decoctions. In 1957, American chemists Francis Hochstein and Anita Paradies identified DMT in an "aqueous extract" of leaves of a plant they named Prestonia amazonicum [sic] and described as "commonly mixed" with B. caapi. The lack of a proper botanical identification of Prestonia amazonica in this study led American ethnobotanist Richard Evans Schultes (1915–2001) and other scientists to raise serious doubts about the claimed plant identity. The mistake likely led the writer William Burroughs to regard the DMT he experimented with in Tangier in 1961 as "Prestonia". Better evidence was produced in 1965 by French pharmacologist Jacques Poisson, who isolated DMT as the sole alkaloid from leaves provided and used by Aguaruna Indians, identified as having come from the vine Diplopterys cabrerana (then known as Banisteriopsis rusbyana). Published in 1970, the first identification of DMT in the plant Psychotria viridis, another common additive of ayahuasca, was made by a team of American researchers led by pharmacologist Ara der Marderosian. Not only did they detect DMT in leaves of P. viridis obtained from Kaxinawá indigenous people, but they also were the first to identify it in a sample of an ayahuasca decoction, prepared by the same indigenous people.
Chemistry
Appearance and form
DMT is commonly handled and stored as a hemifumarate, as other DMT acid salts are extremely hygroscopic and will not readily crystallize. Its freebase form, although less stable than DMT hemifumarate, is favored by recreational users choosing to vaporize the chemical as it has a lower boiling point.
DMT is a lipophilic compound, with an experimental log P of 2.57.
Synthesis
Biosynthesis
Dimethyltryptamine is an indole alkaloid derived from the shikimate pathway. Its biosynthesis is relatively simple. In plants, the parent amino acid L-tryptophan is produced endogenously, whereas in animals L-tryptophan is an essential amino acid obtained from the diet. Regardless of the source of L-tryptophan, the biosynthesis begins with its decarboxylation by an aromatic amino acid decarboxylase (AADC) enzyme (step 1). The resulting decarboxylated tryptophan analog is tryptamine. Tryptamine then undergoes a transmethylation (step 2): the enzyme indolethylamine-N-methyltransferase (INMT) catalyzes the transfer of a methyl group from the cofactor S-adenosylmethionine (SAM), via nucleophilic attack, to tryptamine. This reaction transforms SAM into S-adenosylhomocysteine (SAH) and gives the intermediate product N-methyltryptamine (NMT). NMT is in turn transmethylated by the same process (step 3) to form the end product N,N-dimethyltryptamine. Tryptamine transmethylation is regulated by two products of the reaction: SAH and DMT, which were shown ex vivo to be among the most potent inhibitors of rabbit INMT activity.
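The three enzymatic steps just described can be laid out schematically. The following minimal sketch (in Python, purely illustrative; the data structure and names are not taken from any biochemical library) restates the pathway as a sequence of substrate, enzyme, and product steps:

    # Illustrative restatement of the DMT biosynthesis steps described above.
    PATHWAY = [
        # (step, substrate, enzyme, cofactor change, product)
        (1, "L-tryptophan", "aromatic amino acid decarboxylase (AADC)", None, "tryptamine"),
        (2, "tryptamine", "indolethylamine-N-methyltransferase (INMT)", "SAM -> SAH", "N-methyltryptamine (NMT)"),
        (3, "N-methyltryptamine (NMT)", "INMT", "SAM -> SAH", "N,N-dimethyltryptamine (DMT)"),
    ]

    for step, substrate, enzyme, cofactor, product in PATHWAY:
        via = f", {cofactor}" if cofactor else ""
        print(f"Step {step}: {substrate} --[{enzyme}{via}]--> {product}")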
This transmethylation mechanism has been repeatedly and consistently demonstrated by radiolabeling of the SAM methyl group with carbon-14 ((14C-CH3)SAM).
Laboratory synthesis
DMT can be synthesized through several possible pathways from different starting materials. The first of the two most commonly encountered synthetic routes is the reaction of indole with oxalyl chloride, followed by reaction with dimethylamine and reduction of the carbonyl functionalities with lithium aluminium hydride to form DMT. The second is the N,N-dimethylation of tryptamine using formaldehyde, followed by reduction with sodium cyanoborohydride or sodium triacetoxyborohydride. Sodium borohydride can be used, but requires a larger excess of reagents and lower temperatures because of its higher selectivity for carbonyl groups over imines. Procedures using sodium cyanoborohydride and sodium triacetoxyborohydride (presumably created in situ from cyanoborohydride, though this may not be the case due to the presence of water or methanol) also result in cyanated tryptamine and beta-carboline byproducts of unknown toxicity, whereas using sodium borohydride in the absence of acid does not. Bufotenine, a plant extract, can also be synthesized into DMT.
Alternatively, an excess of methyl iodide or methyl p-toluenesulfonate and sodium carbonate can be used to over-methylate tryptamine, resulting in the creation of a quaternary ammonium salt, which is then dequaternized (demethylated) in ethanolamine to yield DMT. The same two-step procedure is used to synthesize other N,N-dimethylated compounds, such as 5-MeO-DMT.
Clandestine manufacture
In a clandestine setting, DMT is not typically synthesized due to the lack of availability of the starting materials, namely tryptamine and oxalyl chloride. Instead, it is more often extracted from plant sources using a nonpolar hydrocarbon solvent such as naphtha or heptane, and a base such as sodium hydroxide.
Alternatively, an acid–base extraction is sometimes used instead.
A variety of plants contain DMT at levels sufficient to make them viable sources, but specific plants such as Mimosa tenuiflora, Acacia acuminata and Acacia confusa are most often used.
The chemicals involved in the extraction are commonly available. The plant material may be illegal to procure in some countries. The end product (DMT) is illegal in most countries.
Evidence in mammals
In a study published in Science in 1961, Julius Axelrod reported an N-methyltransferase enzyme capable of mediating biotransformation of tryptamine into DMT in rabbit lung. This finding initiated a still ongoing scientific interest in endogenous DMT production in humans and other mammals. From then on, two major complementary lines of evidence have been investigated: localization and further characterization of the N-methyltransferase enzyme, and analytical studies looking for endogenously produced DMT in body fluids and tissues.
In 2013, researchers reported DMT in the pineal gland microdialysate of rodents.
A study published in 2014 reported the biosynthesis of N,N-dimethyltryptamine (DMT) in the human melanoma cell line SK-Mel-147 including details on its metabolism by peroxidases.
Strassman has speculated that more than half of the DMT produced by the acidophilic cells of the pineal gland would be secreted before and during death, the amount being 2.5–3.4 mg/kg. However, this claim has been criticized by David Nichols, who notes that DMT does not appear to be produced in any meaningful amount by the pineal gland. Removal or calcification of the pineal gland does not induce any of the symptoms caused by removal of DMT. The symptoms presented are consistent solely with a reduction in melatonin, the production of which is the pineal gland's known function. Nichols instead suggests that dynorphin and other endorphins are responsible for the reported euphoria experienced by patients during a near-death experience.
In 2014, researchers demonstrated the immunomodulatory potential of DMT and 5-MeO-DMT through the Sigma-1 receptor of human immune cells. This immunomodulatory activity may contribute to significant anti-inflammatory effects and tissue regeneration.
Endogenous DMT
N,N-Dimethyltryptamine (DMT), a psychedelic compound identified endogenously in mammals, is biosynthesized by aromatic L-amino acid decarboxylase (AADC) and indolethylamine-N-methyltransferase (INMT). Studies have investigated brain expression of INMT transcript in rats and humans, coexpression of INMT and AADC mRNA in rat brain and periphery, and brain concentrations of DMT in rats. INMT transcripts were identified in the cerebral cortex, pineal gland, and choroid plexus of both rats and humans via in situ hybridization. Notably, INMT mRNA was colocalized with AADC transcript in rat brain tissues, in contrast to rat peripheral tissues, where there was little overlapping expression of INMT with AADC transcripts. Additionally, extracellular concentrations of DMT in the cerebral cortex of normally behaving rats, with or without the pineal gland, were similar to those of canonical monoamine neurotransmitters including serotonin. A significant increase of DMT levels in the rat visual cortex was observed following induction of experimental cardiac arrest, a finding independent of an intact pineal gland. These results show for the first time that the rat brain is capable of synthesizing and releasing DMT at concentrations comparable to known monoamine neurotransmitters and raise the possibility that this phenomenon may occur similarly in human brains.
The first claimed detection of endogenous DMT in mammals was published in June 1965: German researchers F. Franzen and H. Gross reported detecting and quantifying DMT, along with its structural analog bufotenin (5-HO-DMT), in human blood and urine. In an article published four months later, the method used in their study was strongly criticized, and the credibility of their results was challenged.
Few of the analytical methods used prior to 2001 to measure levels of endogenously formed DMT had enough sensitivity and selectivity to produce reliable results. Gas chromatography, preferably coupled to mass spectrometry (GC-MS), is considered a minimum requirement. A study published in 2005 implemented the most sensitive and selective method used to date to measure endogenous DMT: liquid chromatography–tandem mass spectrometry with electrospray ionization (LC-ESI-MS/MS) allows limits of detection (LODs) 12- to 200-fold lower than those attained by the best methods employed in the 1970s. The data summarized in the table below are from studies conforming to the abovementioned requirements (abbreviations used: CSF = cerebrospinal fluid; LOD = limit of detection; n = number of samples; ng/L and ng/kg = nanograms (10−9 g) per litre and nanograms per kilogram, respectively):
A 2013 study found DMT in microdialysate obtained from a rat's pineal gland, providing evidence of endogenous DMT in the mammalian brain. In 2019, experiments showed that the rat brain is capable of synthesizing and releasing DMT. These results raise the possibility that this phenomenon may occur similarly in human brains.
Detection in human body fluids
DMT may be measured in blood, plasma or urine using chromatographic techniques as a diagnostic tool in clinical poisoning situations or to aid in the medicolegal investigation of suspicious deaths. In general, blood or plasma DMT levels in recreational users of the drug are in the 10–30 μg/L range during the first several hours post-ingestion. Less than 0.1% of an oral dose is eliminated unchanged in the 24-hour urine of humans.
INMT
Before techniques of molecular biology were used to localize indolethylamine N-methyltransferase (INMT), characterization and localization went hand in hand: samples of biological material in which INMT was hypothesized to be active were subjected to enzyme assays. These assays were performed either with a radiolabeled methyl donor such as (14C-CH3)SAM, to which known amounts of unlabeled substrates such as tryptamine were added, or with the addition of a radiolabeled substrate such as (14C)NMT to demonstrate in vivo formation. As qualitative determination of the radioactively tagged product of the enzymatic reaction is sufficient to characterize INMT existence and activity (or lack thereof), analytical methods used in INMT assays need not be as sensitive as those required to directly detect and quantify the minute amounts of endogenously formed DMT. The essentially qualitative method of thin layer chromatography (TLC) was thus used in the vast majority of studies. Robust evidence that INMT can catalyze transmethylation of tryptamine into NMT and DMT was nevertheless provided by reverse isotope dilution analysis coupled to mass spectrometry for rabbit and human lung in the early 1970s.
Selectivity rather than sensitivity proved to be a challenge for some TLC methods, with the discovery in 1974–1975 that incubating rat blood cells or brain tissue with (14C-CH3)SAM and NMT as substrate mostly yields tetrahydro-β-carboline derivatives, with negligible amounts of DMT in brain tissue. It was simultaneously realized that the TLC methods used up to that point in almost all published studies on INMT and DMT biosynthesis were incapable of resolving DMT from those tetrahydro-β-carbolines. These findings dealt a blow to all previous claims of evidence of INMT activity and DMT biosynthesis in avian and mammalian brain, including in vivo, as they all relied upon the problematic TLC methods: their validity was doubted in replication studies that used improved TLC methods and failed to find DMT-producing INMT activity in rat and human brain tissues. Published in 1978, the last study attempting to demonstrate in vivo INMT activity and DMT production in (rat) brain with TLC methods found biotransformation of radiolabeled tryptamine into DMT to be real but "insignificant". The capability of the method used in this latter study to resolve DMT from tetrahydro-β-carbolines was later questioned.
Localization of INMT took a qualitative leap with the use of modern techniques of molecular biology and of immunohistochemistry. In humans, a gene encoding INMT was determined to be located on chromosome 7. Northern blot analyses revealed INMT messenger RNA (mRNA) to be highly expressed in rabbit lung, and in human thyroid, adrenal gland, and lung. Intermediate levels of expression were found in human heart, skeletal muscle, trachea, stomach, small intestine, pancreas, testis, prostate, placenta, lymph node, and spinal cord. Low to very low levels of expression were noted in rabbit brain, and in human thymus, liver, spleen, kidney, colon, ovary, and bone marrow. INMT mRNA expression was absent in human peripheral blood leukocytes, whole brain, and tissue from seven specific brain regions (thalamus, subthalamic nucleus, caudate nucleus, hippocampus, amygdala, substantia nigra, and corpus callosum). Immunohistochemistry showed INMT to be present in large amounts in glandular epithelial cells of the small and large intestines. In 2011, immunohistochemistry revealed the presence of INMT in primate nervous tissue including the retina, spinal cord motor neurons, and pineal gland. A 2020 study using in situ hybridization, a far more accurate tool than northern blot analysis, found mRNA coding for INMT expressed in the human cerebral cortex, choroid plexus, and pineal gland.
Pharmacology
Pharmacodynamics
DMT binds non-selectively, with affinities below 0.6 μmol/L, to the following serotonin receptors: 5-HT1A, 5-HT1B, 5-HT1D, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT6, and 5-HT7. An agonist action has been determined at 5-HT1A, 5-HT2A and 5-HT2C. Its efficacies at other serotonin receptors remain to be determined. Of special interest is the determination of its efficacy at the human 5-HT2B receptor, as two in vitro assays demonstrated DMT's high affinity for this receptor: 0.108 μmol/L and 0.184 μmol/L. This may be of importance because chronic or frequent use of serotonergic drugs showing preferential high affinity and clear agonism at the 5-HT2B receptor has been causally linked to valvular heart disease.
It has also been shown to possess affinity for the dopamine D1, α1-adrenergic, α2-adrenergic, imidazoline-1, and σ1 receptors. Converging lines of evidence established activation of the σ1 receptor at concentrations of 50–100 μmol/L. Its efficacies at the other receptor binding sites are unclear. It has also been shown in vitro to be a substrate for the cell-surface serotonin transporter (SERT) expressed in human platelets, and the rat vesicular monoamine transporter 2 (VMAT2), which was transiently expressed in fall armyworm Sf9 cells. DMT inhibited SERT-mediated serotonin uptake into platelets at an average concentration of 4.00 ± 0.70 μmol/L and VMAT2-mediated serotonin uptake at an average concentration of 93 ± 6.8 μmol/L. In addition, DMT is a potent serotonin releasing agent, with a half-maximal effective concentration (EC50) of 114 nM.
As with other so-called "classical hallucinogens", a large part of DMT psychedelic effects can be attributed to a functionally selective activation of the 5-HT2A receptor. DMT concentrations eliciting 50% of its maximal effect (half maximal effective concentration = EC50) at the human 5-HT2A receptor in vitro are in the 0.118–0.983 μmol/L range. This range of values coincides well with the range of concentrations measured in blood and plasma after administration of a fully psychedelic dose (see Pharmacokinetics).
DMT is one of the few psychedelics not known to produce tolerance to its hallucinogenic effects. The lack of tolerance may be related to the fact that, unlike other psychedelics such as LSD and DOI, DMT does not desensitize serotonin 5-HT2A receptors in vitro, which in turn may be because DMT is a biased agonist of the serotonin 5-HT2A receptor. More specifically, DMT activates the Gq signaling pathway of the serotonin 5-HT2A receptor without significantly recruiting β-arrestin2. Activation of β-arrestin2 is linked to receptor downregulation and tachyphylaxis. Similarly to DMT, 5-MeO-DMT is a biased agonist of the serotonin 5-HT2A receptor, with minimal β-arrestin2 recruitment, and likewise has been associated with little tolerance to its hallucinogenic effects.
As DMT has been shown to have slightly better efficacy (EC50) at human serotonin 2C receptor than at the 2A receptor, 5-HT2C is also likely implicated in DMT's overall effects. Other receptors such as 5-HT1A and σ1 may also play a role.
In 2009, it was hypothesized that DMT may be an endogenous ligand for the σ1 receptor. The concentration of DMT needed for σ1 activation in vitro (50–100 μmol/L) is similar to the behaviorally active concentration measured in mouse brain of approximately 106 μmol/L. This is at least four orders of magnitude higher than the average concentrations measured in rat brain tissue or human plasma under basal conditions (see Endogenous DMT), so σ1 receptors are likely to be activated only under conditions of high local DMT concentrations. If DMT is stored in synaptic vesicles, such concentrations might occur during vesicular release. To illustrate, while the average concentration of serotonin in brain tissue is in the 1.5–4 μmol/L range, the concentration of serotonin in synaptic vesicles was measured at 270 mM. Following vesicular release, the resulting concentration of serotonin in the synaptic cleft, to which serotonin receptors are exposed, is estimated to be about 300 μmol/L. Thus, while in vitro receptor binding affinities, efficacies, and average concentrations in tissue or plasma are useful, they are not likely to predict DMT concentrations in the vesicles or at synaptic or intracellular receptors. Under these conditions, notions of receptor selectivity are moot, and it seems probable that most of the receptors identified as targets for DMT (see above) participate in producing its psychedelic effects.
In September 2020, an in vitro and in vivo study found that the DMT present in ayahuasca infusions promotes neurogenesis, the generation of new neurons.
Pharmacokinetics
DMT peak level concentrations (Cmax) measured in whole blood after intramuscular (IM) injection (0.7 mg/kg, n = 11) and in plasma following intravenous (IV) administration (0.4 mg/kg, n = 10) of fully psychedelic doses are in the range of around 14 to 154 μg/L and 32 to 204 μg/L, respectively.
The corresponding molar concentrations of DMT are therefore in the range of 0.074–0.818 μmol/L in whole blood and 0.170–1.08 μmol/L in plasma. However, several studies have described active transport and accumulation of DMT into rat and dog brains following peripheral administration.
Similar active transport and accumulation processes likely occur in human brains and may concentrate DMT in the brain severalfold or more (relative to blood), resulting in local concentrations in the micromolar or higher range. Such concentrations would be commensurate with serotonin brain tissue concentrations, which have been consistently determined to be in the 1.5–4 μmol/L range.
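As a rough check on the figures above, the mass-to-molar conversion is simple arithmetic. The short sketch below is illustrative only; the molar mass of DMT (about 188.27 g/mol for C12H16N2) is supplied here as an assumption rather than taken from this article. It reproduces the quoted molar ranges from the measured mass concentrations:

    # Convert DMT mass concentrations (ug/L) to molar concentrations (umol/L).
    # Assumption: molar mass of DMT (C12H16N2) is approximately 188.27 g/mol.
    MOLAR_MASS_DMT = 188.27  # g/mol

    for route, low_ug_per_l, high_ug_per_l in [("whole blood, IM", 14, 154), ("plasma, IV", 32, 204)]:
        low = low_ug_per_l / MOLAR_MASS_DMT    # ug/L divided by g/mol gives umol/L
        high = high_ug_per_l / MOLAR_MASS_DMT
        print(f"{route}: {low:.3f}-{high:.3f} umol/L")
    # Prints approximately 0.074-0.818 umol/L (whole blood) and 0.170-1.084 umol/L (plasma),
    # matching the ranges quoted above.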
Coinciding closely with peak psychedelic effects, the mean time to reach peak concentrations (Tmax) was determined to be 10–15 minutes in whole blood after IM injection, and 2 minutes in plasma after IV administration. When taken orally, mixed into an ayahuasca decoction or in freeze-dried ayahuasca gel caps, DMT Tmax is considerably delayed: 107.59 ± 32.5 minutes and 90–120 minutes, respectively.
The pharmacokinetics for vaporizing DMT have not been studied or reported.
Due to its lipophilicity, DMT easily crosses the blood–brain barrier and enters the central nervous system.
Society and culture
Legal status
International law
Internationally, DMT is illegal to possess without authorisation, exemption or licence, although ayahuasca and other DMT-containing brews and preparations are not themselves subject to international control. DMT is controlled by the Convention on Psychotropic Substances at the international level; the Convention makes it illegal to possess, buy, sell, retail or dispense the substance without a licence.
By continent and country
In some countries, ayahuasca is a forbidden, controlled, or regulated substance, while in other countries it is not controlled, or its production, consumption, and sale are allowed to various degrees.
Asia
Israel – DMT is an illegal substance; production, trade and possession are prosecuted as crimes.
India – DMT is illegal to produce, transport, trade in or possess, with a minimum prison sentence of ten years.
Europe
France – DMT, along with most of its plant sources, is classified as a stupéfiant (narcotic).
Germany – DMT is prohibited as a class I drug.
Republic of Ireland – DMT is an illegal Schedule 1 drug under the Misuse of Drugs Acts. An attempt in 2014 by a member of the Santo Daime church to gain a religious exemption to import the drug failed.
Latvia — DMT is prohibited as a Schedule I drug.
Netherlands – The drug is banned as it is classified as a List 1 Drug per the Opium Law. Production, trade and possession of DMT are prohibited.
Russia – Classified as a Schedule I narcotic, including its derivatives (see sumatriptan and zolmitriptan).
Serbia – DMT, along with stereoisomers and salts is classified as List 4 (Psychotropic substances) substance according to Act on Control of Psychoactive Substances.
Sweden – DMT is considered a Schedule 1 drug. The Swedish Supreme Court concluded in 2018 that possession of processed plant material containing a significant amount of DMT is illegal. However, possession of such unprocessed plant material was ruled legal.
United Kingdom – DMT is classified as a Class A drug.
Belgium – DMT cannot be possessed, sold, purchased or imported. Usage is not specifically prohibited, but since usage implies possession one could be prosecuted that way.
North America
Canada – DMT is classified as a Schedule III drug under the Controlled Drugs and Substances Act, but religious groups have been granted exemptions to use it. In 2017 the Santo Daime Church Céu do Montréal received a religious exemption to use ayahuasca as a sacrament in its rituals.
United States – DMT is classified in the United States as a Schedule I drug under the Controlled Substances Act of 1970.
In December 2004, the Supreme Court lifted a stay, thereby allowing the Brazil-based União do Vegetal (UDV) church to use a decoction containing DMT in their Christmas services that year. This decoction is a tea made from boiled leaves and vines, known as hoasca within the UDV, and ayahuasca in different cultures. In Gonzales v. O Centro Espírita Beneficente União do Vegetal, the Supreme Court heard arguments on 1 November 2005, and unanimously ruled in February 2006 that the U.S. federal government must allow the UDV to import and consume the tea for religious ceremonies under the 1993 Religious Freedom Restoration Act.
In September 2008, the three Santo Daime churches filed suit in federal court to gain legal status to import DMT-containing ayahuasca tea. The case, Church of the Holy Light of the Queen v. Mukasey, presided over by U.S. District Judge Owen M. Panner, was ruled in favor of the Santo Daime church. On 21 March 2009, Panner issued a permanent injunction barring the government from prohibiting or penalizing the sacramental use of "Daime tea", allowing members of the church in Ashland to import, distribute and brew ayahuasca. Panner's order said activities of The Church of the Holy Light of the Queen are legal and protected under freedom of religion, and it prohibits the federal government from interfering with and prosecuting church members who follow a list of regulations set out in his order.
Oceania
New Zealand – DMT is classified as a Class A drug under the Misuse of Drugs Act 1975.
Australia – DMT is listed as a Schedule 9 prohibited substance in Australia under the Poisons Standard (October 2015). A Schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO." Between 2011 and 2012, the Australian federal government considered changes to the Australian Criminal Code that would have classified any plants containing any amount of DMT as "controlled plants"; DMT itself was already controlled under existing laws. The proposed changes included similar blanket bans on other substances, such as a ban on any and all plants containing mescaline or ephedrine. The proposal was not pursued after the political embarrassment of realising that it would make the official floral emblem of Australia, Acacia pycnantha (Golden Wattle), illegal. The Therapeutic Goods Administration, the relevant federal authority, had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native and/or religious people). Under the Misuse of Drugs Act 1981, 6.0 g (3/16 oz) of DMT is considered a quantity sufficient to determine the court of trial, and 2.0 g (1/16 oz) is considered to indicate intent to sell and supply.
Black market
Electronic cigarette cartridges filled with DMT started to be sold on the black market in 2018.
See also
Dimethyltryptamine-N-oxide
Psychedelic drug
List of psychoactive plants
MPMI
Serotonergic psychedelic
Psychoplastogen
Alexander Shulgin
SN-22
Rick Strassman
References
External links
DMT chapter from TiHKAL
5-HT2A agonists
Ayahuasca
Biased ligands
Dimethylamino compounds
Entheogens
Experimental antidepressants
Experimental anxiolytics
Experimental hallucinogens
Psychedelic tryptamines
Serotonin receptor agonists
Serotonin releasing agents
Sigma agonists
Tryptamine alkaloids | N,N-Dimethyltryptamine | [
"Chemistry"
] | 11,285 | [
"Biased ligands",
"Tryptamine alkaloids",
"Alkaloids by chemical classification",
"Signal transduction"
] |
8,758 | https://en.wikipedia.org/wiki/Douglas%20Hofstadter | Douglas Richard Hofstadter (born February 15, 1945) is an American cognitive and computer scientist whose research includes concepts such as the sense of self in relation to the external world, consciousness, analogy-making, strange loops, artificial intelligence, and discovery in mathematics and physics. His 1979 book Gödel, Escher, Bach: An Eternal Golden Braid won the Pulitzer Prize for general nonfiction, and a National Book Award (at that time called The American Book Award) for Science. His 2007 book I Am a Strange Loop won the Los Angeles Times Book Prize for Science and Technology.
Early life and education
Hofstadter was born in New York City to future Nobel Prize-winning physicist Robert Hofstadter and Nancy Givan Hofstadter. He grew up on the campus of Stanford University, where his father was a professor, and attended the International School of Geneva in 1958–59. He graduated with distinction in mathematics from Stanford University in 1965, and received his Ph.D. in physics from the University of Oregon in 1975, where his study of the energy levels of Bloch electrons in a magnetic field led to his discovery of the fractal known as Hofstadter's butterfly.
Academic career
Hofstadter was initially appointed to Indiana University's computer science department faculty in 1977, and at that time he launched his research program in computer modeling of mental processes (which he called "artificial intelligence research", a label he has since dropped in favor of "cognitive science research"). In 1984, he moved to the University of Michigan in Ann Arbor, where he was hired as a professor of psychology and was also appointed to the Walgreen Chair for the Study of Human Understanding.
In 1988, Hofstadter returned to IU as College of Arts and Sciences Professor in cognitive science and computer science. He was also appointed adjunct professor of history and philosophy of science, philosophy, comparative literature, and psychology, but has said that his involvement with most of those departments is nominal.
Since 1988, Hofstadter has been the College of Arts and Sciences Distinguished Professor of Cognitive Science and Comparative Literature at Indiana University in Bloomington, where he directs the Center for Research on Concepts and Cognition, which consists of himself and his graduate students, forming the "Fluid Analogies Research Group" (FARG). In 1988, he received the In Praise of Reason award, the Committee for Skeptical Inquiry's highest honor. In 2009, he was elected a Fellow of the American Academy of Arts and Sciences and became a member of the American Philosophical Society. In 2010, he was elected a member of the Royal Society of Sciences in Uppsala, Sweden.
Work and publications
At the University of Michigan and Indiana University, Hofstadter and Melanie Mitchell coauthored a computational model of "high-level perception"—Copycat—and several other models of analogy-making and cognition, including the Tabletop project, co-developed with Robert M. French. The Letter Spirit project, implemented by Gary McGraw and John Rehling, aims to model artistic creativity by designing stylistically uniform "gridfonts" (typefaces limited to a grid). Other more recent models include Phaeaco (implemented by Harry Foundalis) and SeqSee (Abhijit Mahabal), which model high-level perception and analogy-making in the microdomains of Bongard problems and number sequences, respectively, as well as George (Francisco Lara-Dammer), which models the processes of perception and discovery in triangle geometry.
Hofstadter's thesis about consciousness, first expressed in Gödel, Escher, Bach but also present in several of his later books, is that it is "an emergent consequence of seething lower-level activity in the brain." In Gödel, Escher, Bach he draws an analogy between the social organization of a colony of ants and the mind seen as a coherent "colony" of neurons. In particular, Hofstadter claims that our sense of having (or being) an "I" comes from the abstract pattern he terms a "strange loop", an abstract cousin of such concrete phenomena as audio and video feedback that Hofstadter has defined as "a level-crossing feedback loop". The prototypical example of a strange loop is the self-referential structure at the core of Gödel's incompleteness theorems. Hofstadter's 2007 book I Am a Strange Loop carries his vision of consciousness considerably further, including the idea that each human "I" is distributed over numerous brains, rather than being limited to one. Le Ton beau de Marot: In Praise of the Music of Language is a long book devoted to language and translation, especially poetry translation, and one of its leitmotifs is a set of 88 translations of "Ma Mignonne", a highly constrained poem by 16th-century French poet Clément Marot. In this book, Hofstadter jokingly describes himself as "pilingual" (meaning that the sum total of the varying degrees of mastery of all the languages that he has studied comes to 3.14159 ...), as well as an "oligoglot" (someone who speaks "a few" languages).
In 1999, the bicentennial year of the Russian poet and writer Alexander Pushkin, Hofstadter published a verse translation of Pushkin's classic novel-in-verse Eugene Onegin. He has translated other poems and two novels: La Chamade (That Mad Ache) by Françoise Sagan, and La Scoperta dell'Alba (The Discovery of Dawn) by Walter Veltroni, the then-head of the Partito Democratico in Italy. The Discovery of Dawn was published in 2007, and That Mad Ache was published in 2009, bound together with Hofstadter's essay "Translator, Trader: An Essay on the Pleasantly Pervasive Paradoxes of Translation".
Hofstadter's Law
Hofstadter's Law is "It always takes longer than you expect, even when you take into account Hofstadter's Law." The law is stated in Gödel, Escher, Bach.
Students
Hofstadter's former Ph.D. students include (with dissertation title):
David Chalmers – Toward a Theory of Consciousness
Bob French – Tabletop: An Emergent, Stochastic Model of Analogy-Making
Gary McGraw – Letter Spirit (Part One): Emergent High-level Perception of Letters Using Fluid Concepts
Melanie Mitchell – Copycat: A Computer Model of High-Level Perception and Conceptual Slippage in Analogy-making
Public image
Hofstadter has said that he feels "uncomfortable with the nerd culture that centers on computers". He admits that "a large fraction [of his audience] seems to be those who are fascinated by technology", but when it was suggested that his work "has inspired many students to begin careers in computing and artificial intelligence" he replied that he was pleased about that, but that he himself has "no interest in computers". In that interview he also mentioned a course he has twice given at Indiana University, in which he took a "skeptical look at a number of highly touted AI projects and overall approaches". For example, upon the defeat of Garry Kasparov by Deep Blue, he commented: "It was a watershed event, but it doesn't have to do with computers becoming intelligent." In his book Metamagical Themas, he says that "in this day and age, how can anyone fascinated by creativity and beauty fail to see in computers the ultimate tool for exploring their essence?"
In 1988, Dutch director Piet Hoenderdos created a docudrama about Hofstadter and his ideas, Victim of the Brain, based on The Mind's I. It includes interviews with Hofstadter about his work.
Provoked by predictions of a technological singularity (a hypothetical moment in the future of humanity when a self-reinforcing, runaway development of artificial intelligence causes a radical change in technology and culture), Hofstadter has both organized and participated in several public discussions of the topic. At Indiana University in 1999 he organized such a symposium, and in April 2000, he organized a larger symposium titled "Spiritual Robots" at Stanford University, in which he moderated a panel consisting of Ray Kurzweil, Hans Moravec, Kevin Kelly, Ralph Merkle, Bill Joy, Frank Drake, John Holland and John Koza. Hofstadter was also an invited panelist at the first Singularity Summit, held at Stanford in May 2006. Hofstadter expressed doubt that the singularity will occur in the foreseeable future.
In 2023, Hofstadter said that rapid progress in AI made some of his "core beliefs" about the limitations of AI "collapse". Hinting at an AI takeover, he added that human beings may soon be eclipsed by "something else that is far more intelligent and will become incomprehensible to us".
Columnist
When Martin Gardner retired from writing his "Mathematical Games" column for Scientific American magazine, Hofstadter succeeded him in 1981–83 with a column titled Metamagical Themas (an anagram of "Mathematical Games"). An idea he introduced in one of these columns was the concept of "Reviews of This Book", a book containing nothing but cross-referenced reviews of itself that has an online implementation. One of Hofstadter's columns in Scientific American concerned the damaging effects of sexist language, and two chapters of his book Metamagical Themas are devoted to that topic, one of which is a biting analogy-based satire, "A Person Paper on Purity in Language" (1985), in which the reader's presumed revulsion at racism and racist language is used as a lever to motivate an analogous revulsion at sexism and sexist language; Hofstadter published it under the pseudonym William Satire, an allusion to William Safire. Another column reported on the discoveries made by University of Michigan professor Robert Axelrod in his computer tournament pitting many iterated prisoner's dilemma strategies against each other, and a follow-up column discussed a similar tournament that Hofstadter and his graduate student Marek Lugowski organized. The "Metamagical Themas" columns ranged over many themes, including patterns in Frédéric Chopin's piano music (particularly his études), the concept of superrationality (choosing to cooperate when the other party/adversary is assumed to be equally intelligent as oneself), and the self-modifying game of Nomic, based on the way the legal system modifies itself, and developed by philosopher Peter Suber.
Personal life
Hofstadter was married to Carol Ann Brush until her death. They met in Bloomington, and married in Ann Arbor in 1985. They had two children. Carol died in 1993 from the sudden onset of a brain tumor, glioblastoma multiforme, when their children were young. The Carol Ann Brush Hofstadter Memorial Scholarship for Bologna-bound Indiana University students was established in 1996 in her name. Hofstadter's book Le Ton beau de Marot is dedicated to their two children and its dedication reads "To M. & D., living sparks of their Mommy's soul". In 2010, Hofstadter met Baofen Lin in a cha-cha-cha class, and they married in Bloomington in September 2012.
Hofstadter has composed pieces for piano and for piano and voice. He created an audio CD, DRH/JJ, of these compositions performed mostly by pianist Jane Jackson, with a few performed by Brian Jones, Dafna Barenboim, Gitanjali Mathur, and Hofstadter. The dedication for I Am A Strange Loop is: "To my sister Laura, who can understand, and to our sister Molly, who cannot." Hofstadter explains in the preface that his younger sister Molly never developed the ability to speak or understand language. As a consequence of his attitudes about consciousness and empathy, Hofstadter became a vegetarian in his teenage years, and has remained primarily so since that time.
In popular culture
In the 1982 novel 2010: Odyssey Two, Arthur C. Clarke's first sequel to 2001: A Space Odyssey, HAL 9000 is described by the character "Dr. Chandra" as being caught in a "Hofstadter–Möbius loop". The movie uses the term "H. Möbius loop". On April 3, 1995, Hofstadter's book Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought was the first book sold by Amazon.com. Michael R. Jackson's musical A Strange Loop makes reference to Hofstadter's concept and the title of his 2007 book.
Published works
Books
The books published by Hofstadter are:
Gödel, Escher, Bach: an Eternal Golden Braid (1979)
Metamagical Themas (collection of Scientific American columns and other essays, all with postscripts) (1985)
Ambigrammi: un microcosmo ideale per lo studio della creatività (in Italian only)
Fluid Concepts and Creative Analogies (co-authored with several of Hofstadter's graduate students)
Rhapsody on a Theme by Clement Marot (1995, published 1996; volume 16 of the series The Grace A. Tanner Lecture in Human Values)
Le Ton beau de Marot: In Praise of the Music of Language
I Am a Strange Loop (2007)
Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, co-authored with Emmanuel Sander (first published in French as L'Analogie. Cœur de la pensée; published in English in the U.S. in April 2013)
Involvement in other books
Hofstadter has written forewords for or edited the following books:
The Mind's I: Fantasies and Reflections on Self and Soul (co-edited with Daniel Dennett), 1981.
Inversions, by Scott Kim, 1981. (Foreword)
Alan Turing: The Enigma by Andrew Hodges, 1983. (Preface)
Sparse Distributed Memory by Pentti Kanerva, Bradford Books/MIT Press, 1988. (Foreword)
Are Quanta Real? A Galilean Dialogue by J.M. Jauch, Indiana University Press, 1989. (Foreword)
Gödel's Proof (2002 revised edition) by Ernest Nagel and James R. Newman, edited by Hofstadter. In the foreword, Hofstadter explains that the book (originally published in 1958) exerted a profound influence on him when he was young.
Who Invented the Computer? The Legal Battle That Changed Computing History by Alice Rowe Burks, 2003. (Foreword)
Alan Turing: Life and Legacy of a Great Thinker by Christof Teuscher, 2003. (editor)
Brainstem Still Life by Jason Salavon, 2004. (Introduction)
Masters of Deception: Escher, Dalí & the Artists of Optical Illusion by Al Seckel, 2004. (Foreword)
King of Infinite Space: Donald Coxeter, the Man Who Saved Geometry by Siobhan Roberts, Walker and Company, 2006. (Foreword)
Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science by Karl Sigmund, Basic Books, 2017. Hofstadter wrote the foreword and helped with the translation.
To Light the Flame of Reason: Clear Thinking for the Twenty-First Century by Christopher Sturmark, Prometheus, 2022. (Foreword and Contributions)
Translations
Eugene Onegin: A Novel Versification, from the Russian original of Alexander Pushkin, 1999.
The Discovery of Dawn, from the Italian original of Walter Veltroni, 2007.
That Mad Ache, co-bound with Translator, Trader: An Essay on the Pleasantly Pervasive Paradoxes of Translation, from the French original La chamade of Françoise Sagan, 2009.
See also
American philosophy
BlooP and FlooP
Egbert B. Gebstadter
Hofstadter points
Hofstadter's butterfly
Hofstadter's law
List of American philosophers
Platonia dilemma
Superrationality
Notes
References
External links
Stanford University Presidential Lecture – site dedicated to Hofstadter and his work
"The Man Who Would Teach Machines to Think" by James Somers, The Atlantic, November 2013 issue
Profile at Resonance Publications
NF Reviews – bibliographic page with reviews of several of Hofstadter's books
"Autoportrait with Constraint" – a short autobiography in the form of a lipogram
GitHub repo of sourcecode & literature of Hofstadter's students work
Douglas Hofstadter on the Literature Map
1945 births
Living people
20th-century American writers
20th-century American philosophers
21st-century American poets
21st-century American philosophers
21st-century American translators
American science writers
Mathematics popularizers
American skeptics
Fellows of the American Academy of Arts and Sciences
Indiana University faculty
National Book Award winners
Palo Alto High School alumni
Academics from Palo Alto, California
Scientists from Palo Alto, California
American philosophers of mind
Pulitzer Prize for General Nonfiction winners
Recreational mathematicians
Stanford University alumni
Translators of Alexander Pushkin
University of Michigan faculty
University of Oregon alumni
Fellows of the Cognitive Science Society
21st-century American non-fiction writers
International School of Geneva alumni | Douglas Hofstadter | [
"Mathematics"
] | 3,610 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
8,777 | https://en.wikipedia.org/wiki/DNA%20virus | A DNA virus is a virus that has a genome made of deoxyribonucleic acid (DNA) that is replicated by a DNA polymerase. They can be divided between those that have two strands of DNA in their genome, called double-stranded DNA (dsDNA) viruses, and those that have one strand of DNA in their genome, called single-stranded DNA (ssDNA) viruses. dsDNA viruses primarily belong to two realms: Duplodnaviria and Varidnaviria, and ssDNA viruses are almost exclusively assigned to the realm Monodnaviria, which also includes some dsDNA viruses. Additionally, many DNA viruses are unassigned to higher taxa. Reverse transcribing viruses, which have a DNA genome that is replicated through an RNA intermediate by a reverse transcriptase, are classified into the kingdom Pararnavirae in the realm Riboviria.
DNA viruses are ubiquitous worldwide, especially in marine environments where they form an important part of marine ecosystems, and infect both prokaryotes and eukaryotes. They appear to have multiple origins, as viruses in Monodnaviria appear to have emerged from archaeal and bacterial plasmids on multiple occasions, though the origins of Duplodnaviria and Varidnaviria are less clear.
Prominent disease-causing DNA viruses include herpesviruses, papillomaviruses, and poxviruses.
Baltimore classification
The Baltimore classification system is used to group viruses together based on their manner of messenger RNA (mRNA) synthesis and is often used alongside standard virus taxonomy, which is based on evolutionary history. DNA viruses constitute two Baltimore groups: Group I: double-stranded DNA viruses, and Group II: single-stranded DNA viruses. While Baltimore classification is chiefly based on transcription of mRNA, viruses in each Baltimore group also typically share their manner of replication. Viruses in a Baltimore group do not necessarily share genetic relation or morphology.
Double-stranded DNA viruses
The first Baltimore group of DNA viruses are those that have a double-stranded DNA genome. All dsDNA viruses have their mRNA synthesized in a three-step process. First, a transcription preinitiation complex binds to the DNA upstream of the site where transcription begins, allowing for the recruitment of a host RNA polymerase. Second, once the RNA polymerase is recruited, it uses the negative strand as a template for synthesizing mRNA strands. Third, the RNA polymerase terminates transcription upon reaching a specific signal, such as a polyadenylation site.
dsDNA viruses make use of several mechanisms to replicate their genome. Bidirectional replication, in which two replication forks are established at a replication origin site and move in opposite directions of each other, is widely used. A rolling circle mechanism that produces linear strands while progressing in a loop around the circular genome is also common. Some dsDNA viruses use a strand displacement method whereby one strand is synthesized from a template strand, and a complementary strand is then synthesized from the prior synthesized strand, forming a dsDNA genome. Lastly, some dsDNA viruses are replicated as part of a process called replicative transposition whereby a viral genome in a host cell's DNA is replicated to another part of a host genome.
dsDNA viruses can be subdivided between those that replicate in the cell nucleus, and as such are relatively dependent on host cell machinery for transcription and replication, and those that replicate in the cytoplasm, in which case they have evolved or acquired their own means of executing transcription and replication. dsDNA viruses are also commonly divided between tailed dsDNA viruses, referring to members of the realm Duplodnaviria, usually the tailed bacteriophages of the order Caudovirales, and tailless or non-tailed dsDNA viruses of the realm Varidnaviria.
Single-stranded DNA viruses
The second Baltimore group of DNA viruses are those that have a single-stranded DNA genome. ssDNA viruses have the same manner of transcription as dsDNA viruses. However, because the genome is single-stranded, it is first made into a double-stranded form by a DNA polymerase upon entering a host cell. mRNA is then synthesized from the double-stranded form. The double-stranded form of ssDNA viruses may be produced either directly after entry into a cell or as a consequence of replication of the viral genome. Eukaryotic ssDNA viruses are replicated in the nucleus.
Most ssDNA viruses contain circular genomes that are replicated via rolling circle replication (RCR). ssDNA RCR is initiated by an endonuclease that bonds to and cleaves the positive strand, allowing a DNA polymerase to use the negative strand as a template for replication. Replication progresses in a loop around the genome by means of extending the 3'-end of the positive strand, displacing the prior positive strand, and the endonuclease cleaves the positive strand again to create a standalone genome that is ligated into a circular loop. The new ssDNA may be packaged into virions or replicated by a DNA polymerase to form a double-stranded form for transcription or continuation of the replication cycle.
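The rolling-circle mechanism described above is essentially a repeated loop of nick, extend, displace, and release events. The sketch below (Python, purely schematic; it models the order of events with toy strings and is not drawn from any bioinformatics library) may help fix the sequence of steps:

    # Schematic model of ssDNA rolling circle replication (toy strings, not real biochemistry).
    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def copy_from_template(template):
        # A DNA polymerase synthesizes the complementary strand of its template.
        return template.translate(COMPLEMENT)[::-1]

    def rolling_circle(positive_strand, copies=3):
        # The incoming (+) ssDNA genome is first made double-stranded; the new
        # (-) strand then serves as the template for replication.
        negative_strand = copy_from_template(positive_strand)
        genomes = []
        for _ in range(copies):
            # The endonuclease nicks the (+) strand; the polymerase extends its
            # 3' end around the circular (-) template, displacing the old (+) strand.
            new_positive = copy_from_template(negative_strand)
            # The endonuclease cleaves again and the unit-length strand is
            # ligated into a new circular (+) ssDNA genome.
            genomes.append(new_positive)
        return genomes

    print(rolling_circle("ATGCCGTA"))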
Parvoviruses contain linear ssDNA genomes that are replicated via rolling hairpin replication (RHR), which is similar to RCR. Parvovirus genomes have hairpin loops at each end of the genome that repeatedly unfold and refold during replication to change the direction of DNA synthesis to move back and forth along the genome, producing numerous copies of the genome in a continuous process. Individual genomes are then excised from this molecule by the viral endonuclease. For parvoviruses, either the positive or negative sense strand may be packaged into capsids, varying from virus to virus.
Nearly all ssDNA viruses have positive sense genomes, but a few exceptions and peculiarities exist. The family Anelloviridae is the only ssDNA family whose members have negative sense genomes, which are circular. Parvoviruses, as previously mentioned, may package either the positive or negative sense strand into virions. Lastly, bidnaviruses package both the positive and negative linear strands.
ICTV classification
The International Committee on Taxonomy of Viruses (ICTV) oversees virus taxonomy and organizes viruses at the basal level at the rank of realm. Virus realms correspond to the rank of domain used for cellular life but differ in that viruses within a realm do not necessarily share common ancestry, nor do the realms share common ancestry with each other. As such, each virus realm represents at least one instance of viruses coming into existence. Within each realm, viruses are grouped together based on shared characteristics that are highly conserved over time. Three DNA virus realms are recognized: Duplodnaviria, Monodnaviria, and Varidnaviria.
Duplodnaviria
Duplodnaviria contains dsDNA viruses that encode a major capsid protein (MCP) that has the HK97 fold. Viruses in the realm also share a number of other characteristics involving the capsid and capsid assembly, including an icosahedral capsid shape and a terminase enzyme that packages viral DNA into the capsid during assembly. Two groups of viruses are included in the realm: tailed bacteriophages, which infect prokaryotes and are assigned to the order Caudovirales, and herpesviruses, which infect animals and are assigned to the order Herpesvirales.
Duplodnaviria is a very ancient realm, perhaps predating the last universal common ancestor (LUCA) of cellular life. Its origins are not known, nor is it known whether the realm is monophyletic or polyphyletic. A characteristic feature is the HK97 fold found in the MCP of all members, which is found outside the realm only in encapsulins, a type of bacterial nanocompartment; this relation is not fully understood.
The relation between caudoviruses and herpesviruses is also uncertain: they may share a common ancestor, or herpesviruses may be a divergent clade within the order Caudovirales. A common trait among duplodnaviruses is that they can cause latent infections, persisting without replication while remaining able to replicate in the future. Tailed bacteriophages are ubiquitous worldwide, important in marine ecology, and the subject of much research. Herpesviruses are known to cause a variety of epithelial diseases, including herpes simplex, chickenpox and shingles, and Kaposi's sarcoma.
Monodnaviria
Monodnaviria contains ssDNA viruses that encode an endonuclease of the HUH superfamily that initiates rolling circle replication and all other viruses descended from such viruses. The prototypical members of the realm are called CRESS-DNA viruses and have circular ssDNA genomes. ssDNA viruses with linear genomes are descended from them, and in turn some dsDNA viruses with circular genomes are descended from linear ssDNA viruses.
Viruses in Monodnaviria appear to have emerged on multiple occasions from archaeal and bacterial plasmids, a type of extra-chromosomal DNA molecule that self-replicates inside its host. The kingdom Shotokuvirae in the realm likely emerged from recombination events that merged the DNA of these plasmids and complementary DNA encoding the capsid proteins of RNA viruses.
CRESS-DNA viruses include three kingdoms that infect prokaryotes: Loebvirae, Sangervirae, and Trapavirae. The kingdom Shotokuvirae contains eukaryotic CRESS-DNA viruses and the atypical members of Monodnaviria. Eukaryotic monodnaviruses are associated with many diseases, and they include papillomaviruses and polyomaviruses, which cause many cancers, and geminiviruses, which infect many economically important crops.
Varidnaviria
Varidnaviria contains DNA viruses that encode MCPs with a jelly roll (JR) fold in which the JR fold is oriented perpendicular to the surface of the viral capsid. Many members also share a variety of other characteristics, including a minor capsid protein with a single JR fold, an ATPase that packages the genome during capsid assembly, and a common DNA polymerase. Two kingdoms are recognized: Helvetiavirae, whose members have MCPs with a single vertical JR fold, and Bamfordvirae, whose members have MCPs with two vertical JR folds.
Varidnaviria is either monophyletic or polyphyletic and may predate the LUCA. The kingdom Bamfordvirae is likely derived from the other kingdom Helvetiavirae via fusion of two MCPs to have an MCP with two jelly roll folds instead of one. The single jelly roll (SJR) fold MCPs of Helvetiavirae show a relation to a group of proteins that contain SJR folds, including the Cupin superfamily and nucleoplasmins.
Marine viruses in Varidnaviria are ubiquitous worldwide and, like tailed bacteriophages, play an important role in marine ecology. Most identified eukaryotic DNA viruses belong to the realm. Notable disease-causing viruses in Varidnaviria include adenoviruses, poxviruses, and the African swine fever virus. Poxviruses have been highly prominent in the history of modern medicine, especially Variola virus, which caused smallpox. Many varidnaviruses can become endogenized in their host's genome; a peculiar example is the virophages, which, after infecting a host, can protect the host against giant viruses.
Baltimore classification
dsDNA viruses are classified into three realms and include many taxa that are unassigned to a realm:
All viruses in Duplodnaviria are dsDNA viruses.
In Monodnaviria, members of the class Papovaviricetes are dsDNA viruses.
All viruses in Varidnaviria are dsDNA viruses.
The following taxa that are unassigned to a realm exclusively contain dsDNA viruses:
Orders: Ligamenvirales
Families: Ampullaviridae, Baculoviridae, Bicaudaviridae, Clavaviridae, Fuselloviridae, Globuloviridae, Guttaviridae, Halspiviridae, Hytrosaviridae, Nimaviridae, Nudiviridae, Ovaliviridae, Plasmaviridae, Polydnaviridae, Portogloboviridae, Thaspiviridae, Tristromaviridae
Genera: Dinodnavirus, Rhizidiovirus
ssDNA viruses are classified into one realm and include several families that are unassigned to a realm:
In Monodnaviria, all members except viruses in Papovaviricetes are ssDNA viruses.
The unassigned families Anelloviridae and Spiraviridae are ssDNA virus families.
Viruses in the family Finnlakeviridae contain ssDNA genomes. Finnlakeviridae is unassigned to a realm but is a proposed member of Varidnaviria.
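The assignments listed above can be summarized compactly. The following sketch (illustrative only; the data structure simply restates the text) maps the two Baltimore DNA groups onto the ICTV realms and example unassigned taxa named in this section:

    # Summary of the Baltimore-group/realm mapping described above (illustrative restatement).
    BALTIMORE_DNA_GROUPS = {
        "Group I (dsDNA)": {
            "realms": [
                "Duplodnaviria (all members)",
                "Varidnaviria (all members)",
                "Monodnaviria (class Papovaviricetes only)",
            ],
            "unassigned_taxa_examples": ["Ligamenvirales", "Baculoviridae", "Nimaviridae"],
        },
        "Group II (ssDNA)": {
            "realms": ["Monodnaviria (all members except Papovaviricetes)"],
            "unassigned_taxa_examples": [
                "Anelloviridae",
                "Spiraviridae",
                "Finnlakeviridae (proposed member of Varidnaviria)",
            ],
        },
    }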
References
Bibliography
DNA | DNA virus | [
"Biology"
] | 2,731 | [
"Viruses",
"DNA viruses"
] |
8,807 | https://en.wikipedia.org/wiki/Dehydroepiandrosterone | Dehydroepiandrosterone (DHEA), also known as androstenolone, is an endogenous steroid hormone precursor. It is one of the most abundant circulating steroids in humans. DHEA is produced in the adrenal glands, the gonads, and the brain. It functions as a metabolic intermediate in the biosynthesis of the androgen and estrogen sex steroids both in the gonads and in various other tissues. However, DHEA also has a variety of potential biological effects in its own right, binding to an array of nuclear and cell surface receptors, and acting as a neurosteroid and modulator of neurotrophic factor receptors.
In the United States, DHEA is sold as an over-the-counter supplement and as a medication called prasterone.
Biological function
As an androgen
DHEA and other adrenal androgens such as androstenedione, although relatively weak androgens, are responsible for the androgenic effects of adrenarche, such as early pubic and axillary hair growth, adult-type body odor, increased oiliness of hair and skin, and mild acne. DHEA is potentiated locally via conversion into testosterone and dihydrotestosterone (DHT) in the skin and hair follicles. Women with complete androgen insensitivity syndrome (CAIS), who have a non-functional androgen receptor (AR) and are immune to the androgenic effects of DHEA and other androgens, have absent or only sparse/scanty pubic and axillary hair and body hair in general, demonstrating the role of DHEA and other androgens in body hair development at both adrenarche and pubarche.
As an estrogen
DHEA is a weak estrogen. In addition, it is transformed into potent estrogens such as estradiol in certain tissues such as the vagina, and thereby produces estrogenic effects in such tissues.
As a neurosteroid
As a neurosteroid and neurotrophin, DHEA has important effects in the central nervous system.
Biological activity
Hormonal activity
Androgen receptor
Although it functions as an endogenous precursor to more potent androgens such as testosterone and DHT, DHEA has been found to possess some degree of androgenic activity in its own right, acting as a low-affinity (Ki = 1 μM), weak partial agonist of the androgen receptor (AR). Because its intrinsic activity at the receptor is weak, competition for binding with full agonists such as testosterone and DHT can make it behave more like an antagonist, and hence like an antiandrogen, depending on circulating testosterone and DHT levels. However, its affinity for the receptor is very low, so this effect is unlikely to be of much significance under normal circumstances.
Estrogen receptors
In addition to its affinity for the androgen receptor, DHEA has also been found to bind to (and activate) the ERα and ERβ estrogen receptors with Ki values of 1.1 μM and 0.5 μM, respectively, and EC50 values of >1 μM and 200 nM, respectively. Though it was found to be a partial agonist of the ERα with a maximal efficacy of 30–70%, the concentrations required for this degree of activation make it unlikely that the activity of DHEA at this receptor is physiologically meaningful. Remarkably however, DHEA acts as a full agonist of the ERβ with a maximal response similar to or actually slightly greater than that of estradiol, and its levels in circulation and local tissues in the human body are high enough to activate the receptor to the same degree as that seen with circulating estradiol levels at somewhat higher than their maximal, non-ovulatory concentrations; indeed, when combined with estradiol with both at levels equivalent to those of their physiological concentrations, overall activation of the ERβ was doubled.
Other nuclear receptors
DHEA does not bind to or activate the progesterone, glucocorticoid, or mineralocorticoid receptors. Other nuclear receptor targets of DHEA besides the androgen and estrogen receptors include the PPARα, PXR, and CAR. However, whereas DHEA is a ligand of the PPARα and PXR in rodents, it is not in humans. In addition to direct interactions, DHEA is thought to regulate a handful of other proteins via indirect, genomic mechanisms, including the enzymes CYP2C11 and 11β-HSD1 – the latter of which is essential for the biosynthesis of the glucocorticoids such as cortisol and has been suggested to be involved in the antiglucocorticoid effects of DHEA – and the carrier protein IGFBP1.
Neurosteroid activity
Neurotransmitter receptors
DHEA has been found to directly act on several neurotransmitter receptors, including acting as a positive allosteric modulator of the NMDA receptor, as a negative allosteric modulator of the GABAA receptor, and as an agonist of the σ1 receptor.
Neurotrophin receptors
In 2011, the surprising discovery was made that DHEA, as well as its sulfate ester, DHEA-S, directly bind to and activate TrkA and p75NTR, receptors of neurotrophins like nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF), with high affinity. DHEA was subsequently also found to bind to TrkB and TrkC with high affinity, though it only activated TrkC, not TrkB. DHEA and DHEA-S bound to these receptors with affinities in the low nanomolar range (around 5 nM), which were nonetheless approximately two orders of magnitude lower than those of highly potent polypeptide neurotrophins like NGF (0.01–0.1 nM). In any case, DHEA and DHEA-S both circulate at requisite concentrations to activate these receptors and were thus identified as important endogenous neurotrophic factors. They have since been labeled "steroidal microneurotrophins", due to their small-molecule and steroidal nature relative to their polypeptide neurotrophin counterparts. Subsequent research has suggested that DHEA and/or DHEA-S may in fact be phylogenetically ancient "ancestral" ligands of the neurotrophin receptors from early on in the evolution of the nervous system. The findings that DHEA binds to and potently activates neurotrophin receptors may explain the positive association between decreased circulating DHEA levels with age and age-related neurodegenerative diseases.
Microtubule-associated protein 2
Similarly to pregnenolone, its synthetic derivative 3β-methoxypregnenolone (MAP-4343), and progesterone, DHEA has been found to bind to microtubule-associated protein 2 (MAP2), specifically the MAP2C subtype (Kd = 27 μM). However, it is unclear whether DHEA increases binding of MAP2 to tubulin like pregnenolone.
ADHD
Some research has shown that DHEA levels are lower than normal in people with ADHD, and that treatment with methylphenidate or bupropion (medications used to treat ADHD) normalizes DHEA levels.
Other activity
G6PDH inhibitor
DHEA is an uncompetitive inhibitor of G6PDH (Ki = 17 μM; IC50 = 18.7 μM), and is able to lower NADPH levels and reduce NADPH-dependent free radical production. It is thought that this action may be responsible for much of the antiinflammatory, antihyperplastic, chemopreventative, antihyperlipidemic, antidiabetic, and antiobesity activities of DHEA, as well as certain of its immunomodulating activities (with some experimental evidence to support this notion available). However, it has also been said that inhibition of G6PDH activity by DHEA in vivo has not been observed and that the concentrations required for DHEA to inhibit G6PDH in vitro are very high, making the possible contribution of G6PDH inhibition to the effects of DHEA uncertain.
Cancer
DHEA has been promoted in supplement form for claimed cancer-prevention properties; there is no scientific evidence to support these claims.
Miscellaneous
DHEA has been found to competitively inhibit TRPV1.
Biochemistry
Biosynthesis
DHEA is produced in the zona reticularis of the adrenal cortex under the control of adrenocorticotropic hormone (ACTH) and by the gonads under the control of gonadotropin-releasing hormone (GnRH). It is also produced in the brain. DHEA is synthesized from cholesterol via the enzymes cholesterol side-chain cleavage enzyme (CYP11A1; P450scc) and 17α-hydroxylase/17,20-lyase (CYP17A1), with pregnenolone and 17α-hydroxypregnenolone as intermediates. It is derived mostly from the adrenal cortex, with only about 10% being secreted from the gonads. Approximately 50 to 70% of circulating DHEA originates from desulfation of DHEA-S in peripheral tissues. DHEA-S itself originates almost exclusively from the adrenal cortex, with 95 to 100% being secreted from the adrenal cortex in women.
Increasing endogenous production
Regular exercise is known to increase DHEA production in the body. Calorie restriction has also been shown to increase DHEA in primates. Some theorize that the increase in endogenous DHEA brought about by calorie restriction is partially responsible for the longer life expectancy known to be associated with calorie restriction.
Distribution
In the circulation, DHEA is mainly bound to albumin, with a small amount bound to sex hormone-binding globulin (SHBG). The small remainder of DHEA not associated with albumin or SHBG is unbound and free in the circulation.
DHEA easily crosses the blood–brain barrier into the central nervous system.
Metabolism
DHEA is transformed into DHEA-S by sulfation at the C3β position via the sulfotransferase enzymes SULT2A1 and to a lesser extent SULT1E1. This occurs naturally in the adrenal cortex and during first-pass metabolism in the liver and intestines when exogenous DHEA is administered orally. Levels of DHEA-S in circulation are approximately 250 to 300 times those of DHEA. DHEA-S in turn can be converted back into DHEA in peripheral tissues via steroid sulfatase (STS).
The terminal half-life of DHEA is short at only 15 to 30 minutes. In contrast, the terminal half-life of DHEA-S is far longer, at 7 to 10 hours. As DHEA-S can be converted back into DHEA, it serves as a circulating reservoir for DHEA, thereby extending the duration of DHEA.
Metabolites of DHEA include DHEA-S, 7α-hydroxy-DHEA, 7β-hydroxy-DHEA, 7-keto-DHEA, 7α-hydroxyepiandrosterone, and 7β-hydroxyepiandrosterone, as well as androstenediol and androstenedione.
Pregnancy
During pregnancy, DHEA-S is metabolized into the sulfates of 16α-hydroxy-DHEA and 15α-hydroxy-DHEA in the fetal liver as intermediates in the production of the estrogens estriol and estetrol, respectively.
Levels
Prior to puberty in humans, DHEA and DHEA-S levels elevate upon differentiation of the zona reticularis of the adrenal cortex. Peak levels of DHEA and DHEA-S are observed around age 20, which is followed by an age-dependent decline throughout life eventually back to prepubertal concentrations. Plasma levels of DHEA in adult men are 10 to 25 nM, in premenopausal women are 5 to 30 nM, and in postmenopausal women are 2 to 20 nM. Conversely, DHEA-S levels are an order of magnitude higher at 1–10 μM. Levels of DHEA and DHEA-S decline to the lower nanomolar and micromolar ranges in men and women aged 60 to 80 years.
DHEA levels are as follows:
Adult men: 180–1250 ng/dL
Adult women: 130–980 ng/dL
Pregnant women: 135–810 ng/dL
Prepubertal children (<1 year): 26–585 ng/dL
Prepubertal children (1–5 years): 9–68 ng/dL
Prepubertal children (6–12 years): 11–186 ng/dL
Adolescent boys (Tanner II–III): 25–300 ng/dL
Adolescent girls (Tanner II–III): 69–605 ng/dL
Adolescent boys (Tanner IV–V): 100–400 ng/dL
Adolescent girls (Tanner IV–V): 165–690 ng/dL
Measurement
As almost all DHEA is derived from the adrenal glands, blood measurements of DHEA-S/DHEA are useful to detect excess adrenal activity as seen in adrenal cancer or hyperplasia, including certain forms of congenital adrenal hyperplasia. Women with polycystic ovary syndrome tend to have elevated levels of DHEA-S.
Chemistry
DHEA, also known as androst-5-en-3β-ol-17-one, is a naturally occurring androstane steroid and a 17-ketosteroid. It is closely related structurally to androstenediol (androst-5-ene-3β,17β-diol), androstenedione (androst-4-ene-3,17-dione), and testosterone (androst-4-en-17β-ol-3-one). DHEA is the 5-dehydro analogue of epiandrosterone (5α-androstan-3β-ol-17-one) and is also known as 5-dehydroepiandrosterone or as δ5-epiandrosterone.
Isomers
The term "dehydroepiandrosterone" is ambiguous chemically because it does not include the specific positions within epiandrosterone at which hydrogen atoms are missing. DHEA itself is 5,6-didehydroepiandrosterone or 5-dehydroepiandrosterone. A number of naturally occurring isomers also exist and may have similar activities. Some isomers of DHEA are 1-dehydroepiandrosterone (1-androsterone) and 4-dehydroepiandrosterone. These isomers are also technically "DHEA", since they are dehydroepiandrosterones in which hydrogens are removed from the epiandrosterone skeleton.
Dehydroandrosterone (DHA) is the 3α-epimer of DHEA and is also an endogenous androgen.
History
DHEA was first isolated from human urine in 1934 by Adolf Butenandt and Kurt Tscherning.
See also
Epigenetic clock
References
Further reading
Anabolic–androgenic steroids
Androstanes
Estrogens
Hormones of the hypothalamus-pituitary-gonad axis
GABAA receptor negative allosteric modulators
Neurosteroids
NMDA receptor agonists
Pheromones
Pregnane X receptor agonists
Sex hormones
Sigma agonists
Muscle protectors
Muscle stabilizers | Dehydroepiandrosterone | [
"Chemistry",
"Biology"
] | 3,374 | [
"Behavior",
"Chemical ecology",
"Sex hormones",
"Pheromones",
"Sexuality"
] |
8,811 | https://en.wikipedia.org/wiki/Discrete%20Fourier%20transform | In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
The DFT is used in the Fourier analysis of many practical applications. In digital signal processing, the function is any quantity or signal that varies over time, such as the pressure of a sound wave, a radio signal, or daily temperature readings, sampled over a finite time interval (often defined by a window function). In image processing, the samples can be the values of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers.
Since it deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms; so much so that the terms "FFT" and "DFT" are often used interchangeably. Prior to its current usage, the "FFT" initialism may have also been used for the ambiguous term "finite Fourier transform".
Definition
The discrete Fourier transform transforms a sequence of N complex numbers x_0, x_1, ..., x_{N−1} into another sequence of complex numbers X_0, X_1, ..., X_{N−1}, which is defined by:
X_k = Σ_{n=0}^{N−1} x_n · e^{−i 2π k n / N},   k = 0, 1, ..., N−1.
The transform is sometimes denoted by the symbol , as in or or .
The DFT can be interpreted or derived in various ways. For example, X_k can also be evaluated for indices k outside the domain 0 ≤ k ≤ N−1, and that extended sequence is N-periodic. Accordingly, other sequences of N indices are sometimes used, such as −N/2, ..., N/2 − 1 (if N is even) and −(N−1)/2, ..., (N−1)/2 (if N is odd), which amounts to swapping the left and right halves of the result of the transform.
The inverse transform is given by:
x_n = (1/N) Σ_{k=0}^{N−1} X_k · e^{i 2π k n / N},   n = 0, 1, ..., N−1.
The inverse transform x_n is also N-periodic (in index n). In this expression, each X_k is a complex number whose polar coordinates are the amplitude and phase of a complex sinusoidal component of the function x_n (see Discrete Fourier series). The sinusoid's frequency is k cycles per N samples.
The normalization factors multiplying the DFT and IDFT (here 1 and 1/N) and the signs of the exponents are the most common conventions. The only actual requirements of these conventions are that the DFT and IDFT have opposite-sign exponents and that the product of their normalization factors be 1/N. An uncommon normalization of 1/√N for both the DFT and IDFT makes the transform-pair unitary.
Example
This example demonstrates how to apply the DFT to a sequence of length and the input vector
Calculating the DFT of using
results in
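The transformed values can be obtained numerically. A minimal NumPy sketch, using an assumed length-4 input vector (the specific values are illustrative only, not prescribed above), evaluates the defining sum directly and checks the result against a library FFT:

    import numpy as np

    def dft(x):
        # Direct evaluation of the defining sum X_k = sum_n x_n * exp(-2j*pi*k*n/N).
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        k = n.reshape((N, 1))
        return np.exp(-2j * np.pi * k * n / N) @ x

    x = np.array([1.0, 2.0 - 1.0j, -1.0j, -1.0 + 2.0j])   # assumed example input
    X = dft(x)
    print(X)
    assert np.allclose(X, np.fft.fft(x))   # agrees with an FFT of the same sequence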
Properties
Linearity
The DFT is a linear transform, i.e., if DFT{x_n} = X_k and DFT{y_n} = Y_k, then for any complex numbers a and b: DFT{a x_n + b y_n} = a X_k + b Y_k.
Time and frequency reversal
Reversing the time (i.e. replacing n by N−n) in x_n corresponds to reversing the frequency (i.e. replacing k by N−k). Mathematically, with the indices interpreted modulo N,
if DFT{x_n} = X_k
then DFT{x_{N−n}} = X_{N−k}.
Conjugation in time
If DFT{x_n} = X_k, then DFT{x_n*} = X*_{N−k}, where the star denotes complex conjugation and the index is interpreted modulo N.
Real and imaginary part
This table shows some mathematical operations on in the time domain and the corresponding effects on its DFT in the frequency domain.
Orthogonality
The vectors u_k, with components u_k[n] = e^{i 2π k n / N}, form an orthogonal basis over the set of N-dimensional complex vectors:
Σ_{n=0}^{N−1} e^{i 2π k n / N} · e^{−i 2π k′ n / N} = N δ_{k k′},
where δ_{k k′} is the Kronecker delta. (In the last step, the summation is trivial if k = k′, where it is 1 + 1 + ⋯ = N, and otherwise is a geometric series that can be explicitly summed to obtain zero.) This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.
The Plancherel theorem and Parseval's theorem
If X_k and Y_k are the DFTs of x_n and y_n respectively, then Parseval's theorem states:
Σ_{n=0}^{N−1} x_n y_n* = (1/N) Σ_{k=0}^{N−1} X_k Y_k*,
where the star denotes complex conjugation. The Plancherel theorem is a special case of Parseval's theorem and states:
Σ_{n=0}^{N−1} |x_n|² = (1/N) Σ_{k=0}^{N−1} |X_k|².
These theorems are also equivalent to the unitary condition below.
Periodicity
The periodicity can be shown directly from the definition:
Similarly, it can be shown that the IDFT formula leads to a periodic extension.
Shift theorem
Multiplying x_n by a linear phase e^{i 2π n m / N} for some integer m corresponds to a circular shift of the output X_k: X_k is replaced by X_{k−m}, where the subscript is interpreted modulo N (i.e., periodically). Similarly, a circular shift of the input x_n corresponds to multiplying the output X_k by a linear phase. Mathematically,
if DFT{x_n} = X_k
then DFT{x_n · e^{i 2π n m / N}} = X_{k−m}
and DFT{x_{n−m}} = X_k · e^{−i 2π k m / N}.
Circular convolution theorem and cross-correlation theorem
The convolution theorem for the discrete-time Fourier transform (DTFT) indicates that a convolution of two sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when one of the sequences is N-periodic, because the DTFT of a periodic sequence is non-zero at only discrete frequencies, and therefore so is its product with the continuous DTFT of the other sequence. That leads to a considerable simplification of the inverse transform.
where is a periodic summation of the sequence:
Customarily, the DFT and inverse DFT summations are taken over the domain . Defining those DFTs as and , the result is:
In practice, the sequence is usually length N or less, and is a periodic extension of an N-length -sequence, which can also be expressed as a circular function:
Then the convolution can be written as:
which gives rise to the interpretation as a circular convolution of and It is often used to efficiently compute their linear convolution. (see Circular convolution, Fast convolution algorithms, and Overlap-save)
Similarly, the cross-correlation of and is given by:
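Both identities are easy to confirm numerically. A minimal NumPy sketch of the circular convolution theorem, with arbitrary random test data of length 8:

    import numpy as np

    def circular_convolution(a, b):
        # Direct evaluation: c_n = sum_m a_m * b_{(n - m) mod N}.
        N = len(a)
        return np.array([sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)])

    rng = np.random.default_rng(0)
    a = rng.standard_normal(8)
    b = rng.standard_normal(8)

    direct = circular_convolution(a, b)
    via_dft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real   # convolution theorem
    assert np.allclose(direct, via_dft)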
Uniqueness of the Discrete Fourier Transform
As seen above, the discrete Fourier transform has the fundamental property of carrying convolution into componentwise product. A natural question is whether it is the only one with this ability. It has been shown that any linear transform that turns convolution into pointwise product is the DFT up to a permutation of coefficients. Since the number of permutations of n elements equals n!, there exist exactly n! linear and invertible maps with the same fundamental property as the DFT with respect to convolution.
Convolution theorem duality
It can also be shown that:
which is the circular convolution of and .
Trigonometric interpolation polynomial
The trigonometric interpolation polynomial
where the coefficients Xk are given by the DFT of xn above, satisfies the interpolation property for .
For even N, notice that the Nyquist component is handled specially.
This interpolation is not unique: aliasing implies that one could add N to any of the complex-sinusoid frequencies (e.g. changing to ) without changing the interpolation property, but giving different values in between the points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the are real numbers, then is real as well.
In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to (instead of roughly to as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real ; its use is a common mistake.
The unitary DFT
Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the DFT matrix, a Vandermonde matrix,
introduced by Sylvester in 1867,
where is a primitive Nth root of unity.
For example, in the case when , , and
(which is a Hadamard matrix) or when as in the above, , and
The inverse transform is then given by the inverse of the above matrix,
With unitary normalization constants , the DFT becomes a unitary transformation, defined by a unitary matrix:
where is the determinant function. The determinant is the product of the eigenvalues, which are always or as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.
The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):
If X is defined as the unitary DFT of the vector x, then
and the Parseval's theorem is expressed as
If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case x = y, this implies that the length of a vector is preserved as well; this is just the Plancherel theorem,
A consequence of the circular convolution theorem is that the DFT matrix diagonalizes any circulant matrix.
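A minimal NumPy sketch, for an assumed length N = 8, that builds the unitary DFT matrix, verifies its unitarity, and confirms that its eigenvalues are fourth roots of unity (as discussed in the eigenvalue section below):

    import numpy as np

    N = 8
    n = np.arange(N)
    # Unitary DFT matrix: entries exp(-2j*pi*j*k/N) / sqrt(N).
    U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

    # Unitarity: U times its conjugate transpose is the identity,
    # so dot products (and hence lengths) of vectors are preserved.
    assert np.allclose(U @ U.conj().T, np.eye(N))

    # Every eigenvalue satisfies lambda**4 = 1, i.e. lies in {+1, -1, +i, -i}.
    eigvals = np.linalg.eigvals(U)
    assert np.allclose(eigvals ** 4, 1.0)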
Expressing the inverse DFT in terms of the DFT
A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)
First, we can compute the inverse DFT by reversing all but one of the inputs (Duhamel et al., 1988):
(As usual, the subscripts are interpreted modulo N; thus, for , we have .)
Second, one can also conjugate the inputs and outputs:
Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define as with its real and imaginary parts swapped—that is, if then is . Equivalently, equals . Then
That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988).
The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutory—that is, which is its own inverse. In particular, is clearly its own inverse: . A closely related involutory transformation (by a factor of ) is , since the factors in cancel the 2. For real inputs , the real part of is none other than the discrete Hartley transform, which is also involutory.
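A minimal NumPy sketch of two such tricks (the conjugation trick, and recovering the inverse from an index-reversed forward transform), with an arbitrary complex test vector:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
    X = np.fft.fft(x)
    N = len(X)

    # Conjugation trick: IDFT(X) = conj(DFT(conj(X))) / N.
    assert np.allclose(np.conj(np.fft.fft(np.conj(X))) / N, x)

    # Reversal: the forward DFT of X, with its output indices reversed modulo N,
    # equals N times the inverse DFT of X.
    y = np.fft.fft(X)
    n = np.arange(N)
    assert np.allclose(y[(-n) % N] / N, x)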
Eigenvalues and eigenvectors
The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research. Explicit formulas are given with a significant amount of number theory.
Consider the unitary form F defined above for the DFT of length N, whose entries are F_{mn} = e^{−i 2π m n / N} / √N.
This matrix satisfies the matrix polynomial equation:
F⁴ = I.
This can be seen from the inverse properties above: operating twice gives the original data in reverse order, so operating four times gives back the original data and is thus the identity matrix. This means that the eigenvalues λ satisfy the equation:
λ⁴ = 1.
Therefore, the eigenvalues of F are the fourth roots of unity: λ is +1, −1, +i, or −i.
Since there are only four distinct eigenvalues for this matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (There are N independent eigenvectors; a unitary matrix is never defective.)
The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by Gauss (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of N modulo 4, and is given by the following table:
Otherwise stated, the characteristic polynomial of is:
No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like orthogonality and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan et al., 2000; Hanna et al., 2004; Gurevich and Hadani, 2008).
One method to construct DFT eigenvectors to an eigenvalue is based on the linear combination of operators:
For an arbitrary vector , vector satisfies:
hence, vector is, indeed, the eigenvector of DFT matrix . Operators project vectors onto subspaces which are orthogonal for each value of . That is, for two eigenvectors, and we have:
However, in general, the projection operator method does not produce orthogonal eigenvectors within one subspace. The operator can be seen as a matrix, whose columns are eigenvectors of , but they are not orthogonal. When a set of vectors , spanning -dimensional space (where is the multiplicity of eigenvalue ) is chosen to generate the set of eigenvectors to eigenvalue , the mutual orthogonality of is not guaranteed. However, the orthogonal set can be obtained by further applying an orthogonalization algorithm to the set , e.g. the Gram-Schmidt process.
A straightforward approach to obtain DFT eigenvectors is to discretize an eigenfunction of the continuous Fourier transform,
of which the most famous is the Gaussian function.
Since periodic summation of the function means discretizing its frequency spectrum
and discretization means periodic summation of the spectrum,
the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform:
The closed form expression for the series can be expressed by Jacobi theta functions as
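Whatever form the closed expression takes, the eigenvector property itself is easy to confirm numerically. A minimal NumPy sketch, for an assumed period N = 16, showing that the discretized, periodically summed Gaussian is an eigenvector of the unitary DFT with eigenvalue +1 (truncating the periodic sum to |k| <= 10 is an arbitrary choice that is already accurate to machine precision here):

    import numpy as np

    N = 16
    m = np.arange(N)
    # Periodically summed Gaussian F(m) = sum_k exp(-pi * (m + N*k)**2 / N).
    F = sum(np.exp(-np.pi * (m + N * k) ** 2 / N) for k in range(-10, 11))

    # Unitary DFT of F equals F itself, up to rounding error.
    F_hat = np.fft.fft(F) / np.sqrt(N)
    assert np.allclose(F_hat, F)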
Several other simple closed-form analytical eigenvectors for special DFT period N were found (Kong, 2008 and Casper-Yakimov, 2024):
For DFT period N = 2L + 1 = 4K + 1, where K is an integer, the following is an eigenvector of DFT:
For DFT period N = 2L = 4K, where K is an integer, the following are eigenvectors of DFT:
For DFT period N = 4K - 1, where K is an integer, the following are eigenvectors of DFT:
The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the fractional Fourier transform—the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the continuous Fourier transform, the natural orthogonal eigenfunctions are the Hermite functions, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the Kravchuk polynomials (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.
Uncertainty principles
Probabilistic uncertainty principle
If the random variable is constrained by
then
may be considered to represent a discrete probability mass function of , with an associated probability mass function constructed from the transformed variable,
For the case of continuous functions and , the Heisenberg uncertainty principle states that
where and are the variances of and respectively, with the equality attained in the case of a suitably normalized Gaussian distribution. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Still, a meaningful uncertainty principle has been introduced by Massar and Spindel.
However, the Hirschman entropic uncertainty will have a useful analog for the case of the DFT. The Hirschman uncertainty principle is expressed in terms of the Shannon entropy of the two probability functions.
In the discrete case, the Shannon entropies are defined as
and
and the entropic uncertainty principle becomes
The equality is obtained for equal to translations and modulations of a suitably normalized Kronecker comb of period where is any exact integer divisor of . The probability mass function will then be proportional to a suitably translated Kronecker comb of period .
Deterministic uncertainty principle
There is also a well-known deterministic uncertainty principle that uses signal sparsity (or the number of non-zero coefficients). Let N_0 and N_f be the number of non-zero elements of the time and frequency sequences x_n and X_k, respectively. Then,
N_0 · N_f ≥ N.
As an immediate consequence of the inequality of arithmetic and geometric means, one also has N_0 + N_f ≥ 2√N. Both uncertainty principles were shown to be tight for specifically chosen "picket-fence" sequences (discrete impulse trains), and find practical use for signal recovery applications.
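A minimal NumPy sketch with an assumed picket-fence sequence (N = 12, impulses every 3 samples), for which the sparsity bound N0 · Nf ≥ N is attained with equality:

    import numpy as np

    N, p = 12, 3              # p must divide N
    x = np.zeros(N)
    x[::p] = 1.0              # picket-fence sequence (Kronecker comb of period p)

    X = np.fft.fft(x)
    N0 = np.count_nonzero(x)                        # non-zero samples in time
    Nf = np.count_nonzero(~np.isclose(X, 0.0))      # non-zero samples in frequency
    print(N0, Nf, N0 * Nf)    # 4, 3, 12: equality in N0 * Nf >= N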
DFT of real and purely imaginary signals
If x_n are real numbers, as they often are in practical applications, then the DFT is even symmetric:
X_{N−k} = X_k*, where the star denotes complex conjugation and the index is interpreted modulo N.
It follows that, for even N, X_0 and X_{N/2} are real-valued, and the remainder of the DFT is completely specified by just N/2 − 1 complex numbers.
If x_n are purely imaginary numbers, then the DFT is odd symmetric:
X_{N−k} = −X_k*, where the star denotes complex conjugation and the index is interpreted modulo N.
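A minimal NumPy sketch verifying both symmetries on arbitrary random test data of length N = 8:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 8
    k = np.arange(1, N)

    # Real input: even (Hermitian) symmetry X[N-k] = conj(X[k]).
    x = rng.standard_normal(N)
    X = np.fft.fft(x)
    assert np.allclose(X[N - k], np.conj(X[k]))
    assert abs(X[0].imag) < 1e-12 and abs(X[N // 2].imag) < 1e-12   # X_0, X_{N/2} real
    assert np.allclose(np.fft.rfft(x), X[: N // 2 + 1])             # non-redundant half

    # Purely imaginary input: odd symmetry X[N-k] = -conj(X[k]).
    y = 1j * rng.standard_normal(N)
    Y = np.fft.fft(y)
    assert np.allclose(Y[N - k], -np.conj(Y[k]))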
Generalized DFT (shifted and non-linear phase)
It is possible to shift the transform sampling in time and/or frequency domain by some real shifts a and b, respectively. This is sometimes known as a generalized DFT (or GDFT), also called the shifted DFT or offset DFT, and has analogous properties to the ordinary DFT:
Most often, shifts of (half a sample) are used.
While the ordinary DFT corresponds to a periodic signal in both time and frequency domains, produces a signal that is anti-periodic in frequency domain () and vice versa for .
Thus, the specific case of is known as an odd-time odd-frequency discrete Fourier transform (or O2 DFT).
Such shifted transforms are most often used for symmetric data, to represent different boundary symmetries, and for real-symmetric data they correspond to different forms of the discrete cosine and sine transforms.
Another interesting choice is , which is called the centered DFT (or CDFT). The centered DFT has the useful property that, when N is a multiple of four, all four of its eigenvalues (see above) have equal multiplicities (Rubio and Santhanam, 2005)
The term GDFT is also used for the non-linear phase extensions of DFT. Hence, GDFT method provides a generalization for constant amplitude orthogonal block transforms including linear and non-linear phase types. GDFT is a framework
to improve time and frequency domain properties of the traditional DFT, e.g. auto/cross-correlations, by the addition of the properly designed phase shaping function (non-linear, in general) to the original linear phase functions (Akansu and Agirman-Tosun, 2010).
The discrete Fourier transform can be viewed as a special case of the z-transform, evaluated on the unit circle in the complex plane; more general z-transforms correspond to complex shifts a and b above.
Multidimensional DFT
The ordinary DFT transforms a one-dimensional sequence or array that is a function of exactly one discrete variable n. The multidimensional DFT of a multidimensional array that is a function of d discrete variables for in is defined by:
where as above and the d output indices run from . This is more compactly expressed in vector notation, where we define and as d-dimensional vectors of indices from 0 to , which we define as :
where the division is defined as to be performed element-wise, and the sum denotes the set of nested summations above.
The inverse of the multi-dimensional DFT is, analogous to the one-dimensional case, given by:
As the one-dimensional DFT expresses the input as a superposition of sinusoids, the multidimensional DFT expresses the input as a superposition of plane waves, or multidimensional sinusoids. The direction of oscillation in space is . The amplitudes are . This decomposition is of great importance for everything from digital image processing (two-dimensional) to solving partial differential equations. The solution is broken up into plane waves.
The multidimensional DFT can be computed by the composition of a sequence of one-dimensional DFTs along each dimension. In the two-dimensional case the independent DFTs of the rows (i.e., along ) are computed first to form a new array . Then the independent DFTs of y along the columns (along ) are computed to form the final result . Alternatively the columns can be computed first and then the rows. The order is immaterial because the nested summations above commute.
An algorithm to compute a one-dimensional DFT is thus sufficient to efficiently compute a multidimensional DFT. This approach is known as the row-column algorithm. There are also intrinsically multidimensional FFT algorithms.
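A minimal NumPy sketch of the row-column algorithm on an assumed 4 × 6 array of random test data:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 6))

    # Row-column algorithm: 1-D DFTs along the rows, then along the columns.
    rows_done = np.fft.fft(A, axis=1)
    both_done = np.fft.fft(rows_done, axis=0)

    # Same result as the 2-D DFT, and as transforming columns first, then rows.
    assert np.allclose(both_done, np.fft.fft2(A))
    assert np.allclose(np.fft.fft(np.fft.fft(A, axis=0), axis=1), np.fft.fft2(A))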
The real-input multidimensional DFT
For input data consisting of real numbers, the DFT outputs have a conjugate symmetry similar to the one-dimensional case above:
where the star again denotes complex conjugation and the -th subscript is again interpreted modulo (for ).
Applications
The DFT has seen wide usage across a large number of fields; we only sketch a few examples below (see also the references at the end). All applications of the DFT depend crucially on the availability of a fast algorithm to compute discrete Fourier transforms and their inverses, a fast Fourier transform.
Spectral analysis
When the DFT is used for signal spectral analysis, the sequence usually represents a finite set of uniformly spaced time-samples of some signal , where represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (a.k.a. resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation.
A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT, as described in the following paragraph.
The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT.
As already stated, leakage imposes a limit on the inherent resolution of the DTFT, so there is a practical limit to the benefit that can be obtained from a fine-grained DFT.
Steps to Perform Spectral Analysis of Audio Signal
1. Recording and Pre-Processing the Audio Signal
Begin by recording the audio signal, which could be a spoken password, music, or any other sound. Once recorded, the audio signal is denoted as x[n], where n represents the discrete time index. To enhance the accuracy of spectral analysis, any unwanted noise should be reduced using appropriate filtering techniques.
2. Plotting the Original Time-Domain Signal
After noise reduction, the audio signal is plotted in the time domain to visualize its characteristics over time. This helps in understanding the amplitude variations of the signal as a function of time, which provides an initial insight into the signal's behavior.
3. Transforming the Signal from Time Domain to Frequency Domain
The next step is to transform the audio signal from the time domain to the frequency domain using the Discrete Fourier Transform (DFT). The DFT is defined as:
where N is the total number of samples, k represents the frequency index, and X[k] is the complex-valued frequency spectrum of the signal. The DFT allows for decomposing the signal into its constituent frequency components, providing a representation that indicates which frequencies are present and their respective magnitudes.
4. Plotting the Magnitude Spectrum
The magnitude of the frequency-domain representation X[k] is plotted to analyze the spectral content. The magnitude spectrum shows how the energy of the signal is distributed across different frequencies, which is useful for identifying prominent frequency components. It is calculated as:
Example
Analyze a discrete-time audio signal in the frequency domain using the DFT to identify its frequency components
Given Data
Let's consider a simple discrete-time audio signal represented as:
where n represents discrete time samples of the signal.
1. Time-Domain Signal Representation
The given time-domain signal is:
2. DFT Calculation
The DFT is calculated using the formula:
where N is the number of samples (in this case, N=4).
Let's compute X[k] for k=0,1,2,3
For k=0:
For k=1:
For k=2:
For k=3:
3. Magnitude Spectrum
The magnitude of X[k] represents the strength of each frequency component:
The resulting frequency components indicate the distribution of signal energy at different frequencies. The peaks in the magnitude spectrum correspond to dominant frequencies in the original signal.
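Because the numeric sample values are not reproduced above, the following NumPy sketch uses an assumed stand-in signal x[n] = [1, 2, 3, 4] to carry out the same steps: the DFT for k = 0, 1, 2, 3, followed by the magnitude spectrum.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])   # assumed stand-in for the four-sample signal
    N = len(x)

    # X[k] = sum_n x[n] * exp(-2j*pi*k*n/N), evaluated for k = 0, 1, 2, 3.
    X = np.array([np.sum(x * np.exp(-2j * np.pi * k * np.arange(N) / N)) for k in range(N)])
    magnitudes = np.abs(X)

    print(np.round(X, 3))           # [10.+0.j, -2.+2.j, -2.+0.j, -2.-2.j]
    print(np.round(magnitudes, 3))  # [10.0, 2.828, 2.0, 2.828]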
Optics, diffraction, and tomography
The discrete Fourier transform is widely used with spatial frequencies in modeling the way that light, electrons, and other probes travel through optical systems and scatter from objects in two and three dimensions. The dual (direct/reciprocal) vector space of three dimensional objects further makes available a three dimensional reciprocal lattice, whose construction from translucent object shadows (via the Fourier slice theorem) allows tomographic reconstruction of three dimensional objects with a wide range of applications e.g. in modern medicine.
Filter bank
See and .
Data compression
The field of digital signal processing relies heavily on operations in the frequency domain (i.e. on the Fourier transform). For example, several lossy image and sound compression methods employ the discrete Fourier transform: the signal is cut into short segments, each is transformed, and then the Fourier coefficients of high frequencies, which are assumed to be unnoticeable, are discarded. The decompressor computes the inverse transform based on this reduced number of Fourier coefficients. (Compression applications often use a specialized form of the DFT, the discrete cosine transform or sometimes the modified discrete cosine transform.)
Some relatively recent compression algorithms, however, use wavelet transforms, which give a more uniform compromise between time and frequency domain than obtained by chopping data into segments and transforming each segment. In the case of JPEG2000, this avoids the spurious image features that appear when images are highly compressed with the original JPEG.
Partial differential equations
Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite N). The advantage of this approach is that it expands the signal in complex exponentials e^{ikx}, which are eigenfunctions of differentiation: d/dx e^{ikx} = ik e^{ikx}. Thus, in the Fourier representation, differentiation is simple: we just multiply by ik. (However, the choice of k is not unique due to aliasing; for the method to be convergent, a choice similar to that in the trigonometric interpolation section above should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.
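A minimal NumPy sketch of spectral differentiation, the core step of such a spectral method, using an assumed periodic test function sin(3x) sampled on 64 grid points:

    import numpy as np

    N = 64
    x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    f = np.sin(3.0 * x)                      # periodic test function
    k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers 0, 1, ..., -2, -1

    # Differentiation in the Fourier representation: multiply coefficient k by i*k.
    df = np.fft.ifft(1j * k * np.fft.fft(f)).real
    assert np.allclose(df, 3.0 * np.cos(3.0 * x))   # matches the exact derivative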
Polynomial multiplication
Suppose we wish to compute the polynomial product c(x) = a(x) · b(x). The ordinary product expression for the coefficients of c involves a linear (acyclic) convolution, where indices do not "wrap around." This can be rewritten as a cyclic convolution by taking the coefficient vectors for a(x) and b(x) with constant term first, then appending zeros so that the resultant coefficient vectors a and b have dimension d ≥ deg(a(x)) + deg(b(x)) + 1. Then,
Where c is the vector of coefficients for c(x), and the convolution operator is defined so
But convolution becomes multiplication under the DFT:
Here the vector product is taken elementwise. Thus the coefficients of the product polynomial c(x) are just the terms 0, ..., deg(a(x)) + deg(b(x)) of the coefficient vector
With a fast Fourier transform, the resulting algorithm takes O(N log N) arithmetic operations. Due to its simplicity and speed, the Cooley–Tukey FFT algorithm, which is limited to composite sizes, is often chosen for the transform operation. In this case, d should be chosen as the smallest integer greater than the sum of the input polynomial degrees that is factorizable into small prime factors (e.g. 2, 3, and 5, depending upon the FFT implementation).
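A minimal NumPy sketch of FFT-based polynomial multiplication, using two small assumed example polynomials; the padding length d follows the rule described above:

    import numpy as np

    # a(x) = 1 + 2x + 3x^2,  b(x) = 5 - x + 4x^3  (constant term first).
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([5.0, -1.0, 0.0, 4.0])

    # Pad to length d > deg(a) + deg(b) so the cyclic convolution computed via the
    # DFT agrees with the acyclic (linear) convolution of the coefficients.
    d = len(a) + len(b) - 1
    c = np.fft.ifft(np.fft.fft(a, n=d) * np.fft.fft(b, n=d)).real

    assert np.allclose(c, np.convolve(a, b))
    print(np.round(c, 10))   # coefficients of c(x) = a(x) * b(x)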
Multiplication of large integers
The fastest known algorithms for the multiplication of very large integers use the polynomial multiplication method outlined above. Integers can be treated as the value of a polynomial evaluated specifically at the number base, with the coefficients of the polynomial corresponding to the digits in that base. After polynomial multiplication, a relatively low-complexity carry-propagation step completes the multiplication.
Convolution
When data is convolved with a function with wide support, such as for downsampling by a large sampling ratio, because of the Convolution theorem and the FFT algorithm, it may be faster to transform it, multiply pointwise by the transform of the filter and then reverse transform it. Alternatively, a good filter is obtained by simply truncating the transformed data and re-transforming the shortened data set.
Some discrete Fourier transform pairs
Generalizations
Representation theory
The DFT can be interpreted as a complex-valued representation of the finite cyclic group. In other words, a sequence of N complex numbers can be thought of as an element of N-dimensional complex space, or equivalently a function from the finite cyclic group of order N to the complex numbers. Such a function is a class function on the finite cyclic group, and thus can be expressed as a linear combination of the irreducible characters of this group, which are the roots of unity.
From this point of view, one may generalize the DFT to representation theory generally, or more narrowly to the representation theory of finite groups.
More narrowly still, one may generalize the DFT by either changing the target (taking values in a field other than the complex numbers), or the domain (a group other than a finite cyclic group), as detailed in the sequel.
Other fields
Many of the properties of the DFT only depend on the fact that is a primitive root of unity, sometimes denoted or (so that ). Such properties include the completeness, orthogonality, Plancherel/Parseval, periodicity, shift, convolution, and unitarity properties above, as well as many FFT algorithms. For this reason, the discrete Fourier transform can be defined by using roots of unity in fields other than the complex numbers, and such generalizations are commonly called number-theoretic transforms (NTTs) in the case of finite fields. For more information, see number-theoretic transform and discrete Fourier transform (general).
Other finite groups
The standard DFT acts on a sequence x0, x1, ..., xN−1 of complex numbers, which can be viewed as a function {0, 1, ..., N − 1} → C. The multidimensional DFT acts on multidimensional sequences, which can be viewed as functions
This suggests the generalization to Fourier transforms on arbitrary finite groups, which act on functions G → C where G is a finite group. In this framework, the standard DFT is seen as the Fourier transform on a cyclic group, while the multidimensional DFT is a Fourier transform on a direct sum of cyclic groups.
Further, Fourier transform can be on cosets of a group.
Alternatives
There are various alternatives to the DFT for various applications, prominent among which are wavelets. The analog of the DFT is the discrete wavelet transform (DWT). From the point of view of time–frequency analysis, a key limitation of the Fourier transform is that it does not include location information, only frequency information, and thus has difficulty in representing transients. As wavelets have location as well as frequency, they are better able to represent location, at the expense of greater difficulty representing frequency. For details, see comparison of the discrete wavelet transform with the discrete Fourier transform.
See also
Companion matrix
DFT matrix
Fast Fourier transform
FFTPACK
FFTW
Generalizations of Pauli matrices
Least-squares spectral analysis
List of Fourier-related transforms
Multidimensional transform
Zak transform
Quantum Fourier transform
Notes
References
Further reading
esp. section 30.2: The DFT and FFT, pp. 830–838.
(Note that this paper has an apparent typo in its table of the eigenvalue multiplicities: the +i/−i columns are interchanged. The correct table can be found in McClellan and Parks, 1972, and is easily confirmed numerically.)
"Digital Signal Processing" by Thomas Holton.
External links
Interactive explanation of the DFT
Matlab tutorial on the Discrete Fourier Transformation
Interactive flash tutorial on the DFT
Mathematics of the Discrete Fourier Transform by Julius O. Smith III
FFTW: Fast implementation of the DFT - coded in C and under General Public License (GPL)
General Purpose FFT Package: Yet another fast DFT implementation in C & FORTRAN, permissive license
Explained: The Discrete Fourier Transform
Discrete Fourier Transform
Indexing and shifting of Discrete Fourier Transform
Discrete Fourier Transform Properties
Generalized Discrete Fourier Transform (GDFT) with Nonlinear Phase
Fourier analysis
Digital signal processing
Numerical analysis
Discrete transforms
Unitary operators
cs:Fourierova transformace#Diskrétní Fourierova transformace
pt:Transformada de Fourier#Transformada discreta de Fourier
fi:Fourier'n muunnos#Diskreetti Fourier'n muunnos | Discrete Fourier transform | [
"Mathematics"
] | 7,557 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
8,815 | https://en.wikipedia.org/wiki/Dual%20polyhedron | In geometry, every polyhedron is associated with a second dual structure, where the vertices of one correspond to the faces of the other, and the edges between pairs of vertices of one correspond to the edges between pairs of faces of the other. Such dual figures remain combinatorial or abstract polyhedra, but not all can also be constructed as geometric polyhedra. Starting with any given polyhedron, the dual of its dual is the original polyhedron.
Duality preserves the symmetries of a polyhedron. Therefore, for many classes of polyhedra defined by their symmetries, the duals belong to a corresponding symmetry class. For example, the regular polyhedra (the convex Platonic solids and the star Kepler–Poinsot polyhedra) form dual pairs, where the regular tetrahedron is self-dual. The dual of an isogonal polyhedron (one in which any two vertices are equivalent under symmetries of the polyhedron) is an isohedral polyhedron (one in which any two faces are equivalent [...]), and vice versa. The dual of an isotoxal polyhedron (one in which any two edges are equivalent [...]) is also isotoxal.
Duality is closely related to polar reciprocity, a geometric transformation that, when applied to a convex polyhedron, realizes the dual polyhedron as another convex polyhedron.
Kinds of duality
There are many kinds of duality. The kinds most relevant to elementary polyhedra are polar reciprocity and topological or abstract duality.
Polar reciprocation
In Euclidean space, the dual of a polyhedron is often defined in terms of polar reciprocation about a sphere. Here, each vertex (pole) is associated with a face plane (polar plane or just polar) so that the ray from the center to the vertex is perpendicular to the plane, and the product of the distances from the center to each is equal to the square of the radius.
When the sphere has radius r and is centered at the origin (so that it is defined by the equation x² + y² + z² = r²), then the polar dual of a convex polyhedron P is defined as
P° = { q : q · p ≤ r² for every p in P },
where q · p denotes the standard dot product of q and p.
Typically when no sphere is specified in the construction of the dual, then the unit sphere is used, meaning r = 1 in the above definitions.
For each face plane of P described by the linear equation
x0 x + y0 y + z0 z = r²,
the corresponding vertex of the dual polyhedron P° will have coordinates (x0, y0, z0). Similarly, each vertex of P corresponds to a face plane of P°, and each edge line of P corresponds to an edge line of P°. The correspondence between the vertices, edges, and faces of P and P° reverses inclusion. For example, if an edge of P contains a vertex, the corresponding edge of P° will be contained in the corresponding face.
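A minimal NumPy sketch of this face-plane-to-vertex correspondence for the standard example of the cube [−1, 1]³ reciprocated about the unit sphere (r = 1):

    import numpy as np

    # Face planes of the cube [-1, 1]^3, each written as x0*x + y0*y + z0*z = 1;
    # every row holds the coefficients (x0, y0, z0) of one face plane.
    cube_face_planes = np.array([
        [ 1,  0,  0], [-1,  0,  0],
        [ 0,  1,  0], [ 0, -1,  0],
        [ 0,  0,  1], [ 0,  0, -1],
    ], dtype=float)

    # Polar reciprocation about the unit sphere sends each face plane to the vertex
    # with the same coordinates, so the dual of the cube is the regular octahedron.
    dual_vertices = cube_face_planes.copy()
    print(dual_vertices)

    # Reciprocating again: the octahedron's face planes (+-1)x + (+-1)y + (+-1)z = 1
    # give back the eight cube vertices (+-1, +-1, +-1), as duality requires.
    octa_face_planes = np.array([[sx, sy, sz] for sx in (-1, 1)
                                 for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
    print(octa_face_planes)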
For a polyhedron with a center of symmetry, it is common to use a sphere centered on this point, as in the Dorman Luke construction (mentioned below). Failing that, for a polyhedron with a circumscribed sphere, inscribed sphere, or midsphere (one with all edges as tangents), this can be used. However, it is possible to reciprocate a polyhedron about any sphere, and the resulting form of the dual will depend on the size and position of the sphere; as the sphere is varied, so too is the dual form. The choice of center for the sphere is sufficient to define the dual up to similarity.
If a polyhedron in Euclidean space has a face plane, edge line, or vertex lying on the center of the sphere, the corresponding element of its dual will go to infinity. Since Euclidean space never reaches infinity, the projective equivalent, called extended Euclidean space, may be formed by adding the required 'plane at infinity'. Some theorists prefer to stick to Euclidean space and say that there is no dual. Meanwhile, found a way to represent these infinite duals, in a manner suitable for making models (of some finite portion).
The concept of duality here is closely related to the duality in projective geometry, where lines and edges are interchanged. Projective polarity works well enough for convex polyhedra. But for non-convex figures such as star polyhedra, when we seek to rigorously define this form of polyhedral duality in terms of projective polarity, various problems appear.
Because of the definitional issues for geometric duality of non-convex polyhedra, argues that any proper definition of a non-convex polyhedron should include a notion of a dual polyhedron.
Canonical duals
Any convex polyhedron can be distorted into a canonical form, in which a unit midsphere (or intersphere) exists tangent to every edge, and such that the average position of the points of tangency is the center of the sphere. This form is unique up to congruences.
If we reciprocate such a canonical polyhedron about its midsphere, the dual polyhedron will share the same edge-tangency points, and thus will also be canonical. It is the canonical dual, and the two together form a canonical dual compound.
Dorman Luke construction
For a uniform polyhedron, each face of the dual polyhedron may be derived from the original polyhedron's corresponding vertex figure by using the Dorman Luke construction.
Topological duality
Even when a pair of polyhedra cannot be obtained by reciprocation from each other, they may be called duals of each other as long as the vertices of one correspond to the faces of the other, and the edges of one correspond to the edges of the other, in an incidence-preserving way. Such pairs of polyhedra are still topologically or abstractly dual.
The vertices and edges of a convex polyhedron form a graph (the 1-skeleton of the polyhedron), embedded on the surface of the polyhedron (a topological sphere). This graph can be projected to form a Schlegel diagram on a flat plane. The graph formed by the vertices and edges of the dual polyhedron is the dual graph of the original graph.
More generally, for any polyhedron whose faces form a closed surface, the vertices and edges of the polyhedron form a graph embedded on this surface, and the vertices and edges of the (abstract) dual polyhedron form the dual graph of the original graph.
An abstract polyhedron is a certain kind of partially ordered set (poset) of elements, such that incidences, or connections, between elements of the set correspond to incidences between elements (faces, edges, vertices) of a polyhedron. Every such poset has a dual poset, formed by reversing all of the order relations. If the poset is visualized as a Hasse diagram, the dual poset can be visualized simply by turning the Hasse diagram upside down.
Every geometric polyhedron corresponds to an abstract polyhedron in this way, and has an abstract dual polyhedron. However, for some types of non-convex geometric polyhedra, the dual polyhedra may not be realizable geometrically.
Self-dual polyhedra
Topologically, a polyhedron is said to be self-dual if its dual has exactly the same connectivity between vertices, edges, and faces. Abstractly, self-dual polyhedra have the same Hasse diagram as their duals. A geometrically self-dual polyhedron is not only topologically self-dual, but its polar reciprocal about a certain point, typically its centroid, is a similar figure. For example, the dual of a regular tetrahedron is another regular tetrahedron, reflected through the origin.
Every polygon is topologically self-dual, since it has the same number of vertices as edges, and these are switched by duality. But it is not necessarily self-dual (up to rigid motion, for instance). Every polygon has a regular form which is geometrically self-dual about its intersphere: all angles are congruent, as are all edges, so under duality these congruences swap. Similarly, every topologically self-dual convex polyhedron can be realized by an equivalent geometrically self-dual polyhedron, its canonical polyhedron, reciprocal about the center of the midsphere.
There are infinitely many geometrically self-dual polyhedra. The simplest infinite family is the pyramids. Another infinite family, elongated pyramids, consists of polyhedra that can be roughly described as a pyramid sitting on top of a prism (with the same number of sides). Adding a frustum (pyramid with the top cut off) below the prism generates another infinite family, and so on. There are many other convex self-dual polyhedra. For example, there are 6 different ones with 7 vertices and 16 with 8 vertices.
A self-dual non-convex icosahedron with hexagonal faces was identified by Brückner in 1900. Other non-convex self-dual polyhedra have been found, under certain definitions of non-convex polyhedra and their duals.
Dual polytopes and tessellations
Duality can be generalized to n-dimensional space and dual polytopes; in two dimensions these are called dual polygons.
The vertices of one polytope correspond to the (n − 1)-dimensional elements, or facets, of the other, and the j points that define a (j − 1)-dimensional element will correspond to j hyperplanes that intersect to give a (n − j)-dimensional element. The dual of an n-dimensional tessellation or honeycomb can be defined similarly.
In general, the facets of a polytope's dual will be the topological duals of the polytope's vertex figures. For the polar reciprocals of the regular and uniform polytopes, the dual facets will be polar reciprocals of the original's vertex figure. For example, in four dimensions, the vertex figure of the 600-cell is the icosahedron; the dual of the 600-cell is the 120-cell, whose facets are dodecahedra, which are the dual of the icosahedron.
Self-dual polytopes and tessellations
The primary class of self-dual polytopes consists of regular polytopes with palindromic Schläfli symbols. All regular polygons {a} are self-dual, as are polyhedra of the form {a,a}, 4-polytopes of the form {a,b,a}, 5-polytopes of the form {a,b,b,a}, etc.
The self-dual regular polytopes are:
All regular polygons, {a}.
Regular tetrahedron: {3,3}
In general, all regular n-simplexes, {3,3,...,3}
The regular 24-cell in 4 dimensions, {3,4,3}.
The great 120-cell {5,5/2,5} and the grand stellated 120-cell {5/2,5,5/2}
The self-dual (infinite) regular Euclidean honeycombs are:
Apeirogon: {∞}
Square tiling: {4,4}
Cubic honeycomb: {4,3,4}
In general, all regular n-dimensional Euclidean hypercubic honeycombs: {4,3,...,3,4}.
The self-dual (infinite) regular hyperbolic honeycombs are:
Compact hyperbolic tilings: {5,5}, {6,6}, ... {p,p}.
Paracompact hyperbolic tiling: {∞,∞}
Compact hyperbolic honeycombs: {3,5,3}, {5,3,5}, and {5,3,3,5}
Paracompact hyperbolic honeycombs: {3,6,3}, {6,3,6}, {4,4,4}, and {3,3,4,3,3}
See also
Conway polyhedron notation
Dual polygon
Self-dual graph
Self-dual polygon
References
Notes
Bibliography
External links
Polyhedra
Polyhedron
Polytopes | Dual polyhedron | [
"Mathematics"
] | 2,508 | [
"Mathematical structures",
"Category theory",
"Duality theories",
"Geometry"
] |
8,826 | https://en.wikipedia.org/wiki/Davy%20lamp | The Davy lamp is a safety lamp used in flammable atmospheres, invented in 1815 by Sir Humphry Davy. It consists of a wick lamp with the flame enclosed inside a mesh screen. It was created for use in coal mines, to reduce the danger of explosions due to the presence of methane and other flammable gases, called firedamp or minedamp.
History
German polymath Alexander von Humboldt, working for the German Bureau of Mines, had concerns for the health and welfare of the miners and invented a kind of respirator and "four lamps of different construction suitable for employment in various circumstances. The respirator was to prevent the inhaling of injurious gases, and to supply the miner with good air; the lamps were constructed to burn in the most inflammable kind of fire-damp without igniting the gas. They were the forerunners of Davy's later invention, and were frequently made use of by the miners."
Davy's invention was preceded by that of William Reid Clanny, an Irish doctor at Bishopwearmouth, who had read a paper to the Royal Society in May 1813. The more cumbersome Clanny safety lamp was successfully tested at Herrington Mill, and he won medals from the Royal Society of Arts.
Despite his lack of scientific knowledge, engine-wright George Stephenson devised a lamp in which the air entered via tiny holes, through which the flames of the lamp could not pass. A month before Davy presented his design to the Royal Society, Stephenson demonstrated his own lamp to two witnesses by taking it down Killingworth Colliery and holding it in front of a fissure from which firedamp was issuing.
The first trial of a Davy lamp with a wire sieve was at Hebburn Colliery on 9 January 1816. A letter from Davy (which he intended to be kept private) describing his findings and various suggestions for a safety lamp was made public at a meeting in Newcastle on 3 November 1815, and a paper describing the lamp was formally presented at a Royal Society meeting in London on 9 November. For it, Davy was awarded the society's Rumford Medal. Davy's lamp differed from Stephenson's in that the flame was surrounded by a screen of gauze, whereas Stephenson's prototype lamp had a perforated plate contained in a glass cylinder (a design mentioned in Davy's Royal Society paper as an alternative to his preferred solution). For his invention Davy was given £2,000 worth of silver (the money being raised by public subscription), whilst Stephenson was accused of stealing the idea from Davy, because the fully developed 'Geordie lamp' had not been demonstrated by Stephenson until after Davy had presented his paper at the Royal Society, and (it was held) previous versions had not actually been safe.
A local committee of enquiry gathered in support of Stephenson exonerated him, showing that he had been working separately to create the Geordie lamp, and raised a subscription for him of £1,000. Davy and his supporters refused to accept their findings, and would not see how an uneducated man such as Stephenson could come up with the solution he had: Stephenson himself freely admitted that he had arrived at a practical solution on the basis of an erroneous theory. In 1833, a House of Commons committee found that Stephenson had equal claim to having invented the safety lamp. Davy went to his grave claiming that Stephenson had stolen his idea. The Stephenson lamp was used almost exclusively in North East England, whereas the Davy lamp was used everywhere else. The experience gave Stephenson a lifelong distrust of London-based, theoretical, scientific experts.
Design and theory
The lamp consists of a wick lamp with the flame enclosed inside a mesh screen. The screen acts as a flame arrestor; air (and any firedamp present) can pass through the mesh freely enough to support combustion, but the holes are too fine to allow a flame to propagate through them and ignite any firedamp outside the mesh. The Davy lamp was fuelled by oil or naphtha (lighter fluid).
The lamp also provided a test for the presence of gases. If flammable gas mixtures were present, the flame of the Davy lamp burned higher with a blue tinge. Lamps were equipped with a metal gauge to measure the height of the flame. Miners could place the safety lamp close to the ground to detect gases, such as carbon dioxide, that are denser than air and so could collect in depressions in the mine; if the mine air was oxygen-poor (asphyxiant gas), the lamp flame would be extinguished (black damp or chokedamp). A methane-air flame is extinguished at about 17% oxygen content (which will still support life), so the lamp gave an early indication of an unhealthy atmosphere, allowing the miners to get out before they died of asphyxiation.
Impact
In 1816, the Cumberland Pacquet reported a demonstration of the Davy lamp at William Pit, Whitehaven. Placed in a blower "... the effect was grand beyond description. At first a blue flame was seen to cap the flame of the lamp, – then succeeded a lambent flame, playing in the cylinder; and shortly after, the flame of the firedamp expanded, so as to completely fill the wire gauze. For some time, the flame of the lamp was seen through that of the firedamp, which became ultimately extinguished without explosion. Results more satisfactory were not to be wished..." Another correspondent to the paper commented "The Lamp offers absolute security to the miner... With the excellent ventilation of the Whitehaven Collieries and the application of Sir HUMPHRY's valuable instrument, the accidents from the explosion of (carburetted) hydrogene which have occurred (although comparatively few for such extensive works) will by this happy invention be avoided".
However, this prediction was not fulfilled: in the next thirty years, firedamp explosions in Whitehaven pits killed 137 people. More generally, the Select Committee on Accidents in Mines reported in 1835 that the introduction of the Davy lamp had led to an increase in mine accidents; the lamp encouraged the working of mines and parts of mines that had previously been closed for safety reasons. For example, in 1835, 102 men and boys were killed by a firedamp explosion in a Wallsend colliery working the Bensham seam, described at the subsequent inquest by John Buddle as "a dangerous seam, which required the utmost care in keeping in a working state", which could only be worked with the Davy lamp. The coroner noted that a previous firedamp explosion in 1821 had killed 52, but directed his jury that any finding on the wisdom of continuing to work the seam was outside their province.
The lamps had to be provided by the miners themselves, not the owners, as traditionally the miners had bought their own candles from the company store. Miners still preferred the better illumination from a naked light, and mine regulations insisting that only safety lamps be used were draconian in principle, but in practice neither observed nor enforced. After two accidents in two years (1838–39) in Cumberland pits, both caused by safety checks being carried out by the light of a naked flame, the Royal Commission on Children's Employment commented both on the failure to learn from the first accident, and on the "further absurdity" of "carrying a Davy lamp in one hand for the sake of safety, and a naked lighted candle in the other, as if for the sake of danger. Beyond this there can be no conceivable thoughtlessness and folly; and when such management is allowed in the mine of two of the most opulent coal-proprietors in the kingdom, we cease to wonder at anything that may take place in mines worked by men equally without capital and science"
Another reason for the increase in accidents was the unreliability of the lamps themselves. The bare gauze was easily damaged, and once just a single wire broke or rusted away, the lamp became unsafe.
Work carried out by a scientific witness and reported by the committee showed that the Davy lamp became unsafe in airflows so low that a Davy lamp carried at normal walking pace against normal airflows in walkways was only safe if provided with a draught shield (not normally fitted), and the committee noted that accidents had happened when the lamp was "in general and careful use; no one survived to tell the tale of how these occurrences took place; conjecture supplied the want of positive knowledge most unsatisfactorily; but incidents are recorded which prove what must follow unreasonable testing of the lamp; and your Committee are constrained to believe that ignorance and a false reliance upon its merits, in cases attended with unwarrantable risks, have led to disastrous consequences". The "South Shields Committee", a body set up by a public meeting there (in response to an explosion at the St Hilda pit in 1839) to consider the prevention of accidents in mines, had shown that mine ventilation in the North-East was generally deficient, with an insufficient supply of fresh air giving every opportunity for explosive mixtures of gas to accumulate. A subsequent select committee in 1852 concurred with this view; firedamp explosions could best be prevented by improving mine ventilation (by the use of steam ejectors: the committee specifically advised against fan ventilation), which had been neglected because of over-reliance on the safety of the Davy lamp.
The practice of using a Davy lamp and a candle together was not entirely absurd, however, if the Davy lamp is understood to be not only a safe light in an explosive atmosphere, but also a gauge of firedamp levels. In practice, however, the warning from the lamp was not always noticed in time, especially in the working conditions of the era.
The Regulation and Inspection of Mines Act of 1860 therefore required coal mines to have an adequate amount of ventilation, constantly produced, to dilute and render harmless noxious gases so that work areas were – under ordinary circumstances – in a fit state to be worked (areas where a normally safe atmosphere could not be ensured were to be fenced off "as far as possible"): it also required safety lamps to be examined and securely locked by a duly authorised person before use.
Even when new and clean, illumination from the safety lamps was very poor, and the problem was not fully resolved until electric lamps became widely available in the late 19th century.
Successors
A modern-day equivalent of the Davy lamp has been used in the Olympic flame torch relays. It was used in the relays for the Sydney, Athens, Turin, Beijing, Vancouver and Singapore Youth Olympic Games. It was also used for the Special Olympics Shanghai, Pan American and Central African games and for the London 2012 Summer Olympics relay.
Lamps are still made in Eccles, Greater Manchester; in Aberdare, South Wales; and in Kolkata, India.
A replica of a Davy lamp is located in front of the ticket office at the Stadium of Light (Sunderland AFC) which is built on a former coal mine.
In 2015, the bicentenary of Davy's invention, the former Bersham Colliery, in Wrexham, Wales, now a mining museum, hosted an event for members of the public to bring in their Davy lamps for identification. The National Mining Museum Scotland at Newtongrange, Scotland, also celebrated the 200th anniversary of the invention. In 2016, the Royal Institution of Great Britain, where the Davy lamp prototype is displayed, decided to have the invention 3D scanned, reverse engineered and presented to the museum visitors in a more accessible digital format via a virtual reality cabinet. At first sight it appears to be a traditional display cabinet but has a touch screen with various options for visitors to view and reference the virtual exhibits inside.
Notes
References
Further reading
External links
Popular Science video showing an experiment that demonstrates the principle of the Davy lamp
Edwards, Eric The Miners' Safety Lamp at Pitt Rivers Museum, Oxford University
Humphry Davy Brief bio at Spartacus Educational
English inventions
History of mining
Oil lamp
Mine safety
Mining equipment
1815 introductions
Davy family
19th-century inventions
de:Grubenlampe#Sicherheitsgrubenlampen | Davy lamp | [
"Engineering"
] | 2,473 | [
"Mining equipment"
] |
11,905,697 | https://en.wikipedia.org/wiki/Faggoting%20%28metalworking%29 | Faggoting or faggoting and folding is a metalworking technique used in the smelting and forging of wrought iron, blister steel, and other steel. Faggoting is a process in which rods or bars of iron and/or steel are gathered (like a bundle of sticks or "faggot") and forge welded together. The faggot would then be drawn out lengthwise. The bar might then be broken and the pieces made into a faggot again or folded over, and forge welded again.
Wrought iron which had been faggoted twice was referred to as "Best"; if faggoted again it would become "Best Best", then "Treble best", etc. Faggoting stretches chemical impurities within the metal into long thin inclusions, creating a grain within the metal. "Best" bars would have a tensile strength along the grain of about 23 short tons per square inch. "Treble best" could reach 28 short tons per square inch. The strengths across the grain would be about 15% lower. This grain makes wrought iron especially tricky to forge, as it behaves much like wood grain—prone to spontaneous splitting along the grain if worked too cold. Wrought iron, especially less refined iron, must be worked at or near a forge welding heat, that is, incandescent and white in color. In old, very rusted pieces of wrought iron, the grain is revealed, making the iron bear a striking resemblance to reddish-brown wood, and if it is rusted into the grain too deeply, it will need to be refined once more before reforging it.
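For reference, the quoted figures convert to SI units roughly as follows (a small illustrative calculation added here, assuming the short ton of 2,000 pounds named in the text):

```python
# Convert strengths quoted in short tons(-force) per square inch to megapascals,
# assuming 1 short ton = 2,000 lbf and 1 psi = 6,894.76 Pa.
POUNDS_PER_SHORT_TON = 2000.0
PASCALS_PER_PSI = 6894.76

def short_tons_per_sq_in_to_mpa(value: float) -> float:
    return value * POUNDS_PER_SHORT_TON * PASCALS_PER_PSI / 1e6

print(round(short_tons_per_sq_in_to_mpa(23)))   # "Best" bars: about 317 MPa
print(round(short_tons_per_sq_in_to_mpa(28)))   # "Treble best": about 386 MPa
```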
Blister steel that has been faggoted was known as shear steel; if faggoted twice, as double shear steel; and if faggoted three times, as triple shear steel. Steel that was intended to be treated this way was carburised, causing little bubbles on the surface of the material, hence the name "blister steel". It was then forge welded together to refine it and work the carbon throughout the material, instead of just on the surface.
References
Metalworking
Steelmaking | Faggoting (metalworking) | [
"Chemistry"
] | 446 | [
"Metallurgical processes",
"Steelmaking"
] |
11,907,750 | https://en.wikipedia.org/wiki/MIDI%20beat%20clock | MIDI beat clock, or simply MIDI clock, is a clock signal that is broadcast via MIDI to ensure that several MIDI-enabled devices such as a synthesizer or music sequencer stay in synchronization. Clock events are sent at a rate of 24 pulses per quarter note. Those pulses are used to maintain a synchronized tempo for synthesizers that have BPM-dependent voices and also for arpeggiator synchronization.
MIDI beat clock differs from MIDI timecode in that MIDI beat clock is tempo-dependent.
Location information can be specified using MIDI Song Position Pointer (SPP, see below), although many simple MIDI devices ignore this message.
Messages
MIDI beat clock defines the following real-time messages:
clock (decimal 248, hex 0xF8)
start (decimal 250, hex 0xFA)
continue (decimal 251, hex 0xFB)
stop (decimal 252, hex 0xFC)
MIDI also specifies a System Common message called Song Position Pointer (SPP). SPP can be used in conjunction with the above real-time messages for complete sync. This message consists of 3 bytes; a status byte (decimal 242, hex 0xF2), followed by two 7-bit data bytes (least significant byte first) forming a 14-bit value that specifies the number of "MIDI beats" (1 MIDI beat = a 16th note = 6 clock pulses) since the start of the song. This message only needs to be sent once if a jump to a different position in the song is needed. Thereafter only real-time clock messages need to be sent to advance the song position one tick at a time.
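A minimal Python sketch (an illustration added here, not a reference implementation) of packing and unpacking an SPP message exactly as described above:

```python
def encode_spp(midi_beats: int) -> bytes:
    """Pack a Song Position Pointer message: status 0xF2, then LSB, then MSB.

    midi_beats counts 16th notes (6 MIDI clock pulses each) from the start
    of the song and must fit in 14 bits (0..16383)."""
    if not 0 <= midi_beats <= 0x3FFF:
        raise ValueError("SPP position must fit in 14 bits")
    lsb = midi_beats & 0x7F          # low 7 bits, sent first
    msb = (midi_beats >> 7) & 0x7F   # high 7 bits
    return bytes([0xF2, lsb, msb])

def decode_spp(message: bytes) -> int:
    """Recover the 14-bit MIDI-beat count from an SPP message."""
    status, lsb, msb = message
    assert status == 0xF2
    return (msb << 7) | lsb

# Example: jump two 4/4 bars into the song (2 bars = 32 sixteenth notes).
msg = encode_spp(32)
print(msg.hex())         # 'f22000'
print(decode_spp(msg))   # 32
```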
Pulses per quarter note
Pulses per quarter note (PPQN), also known as pulses per quarter (PPQ), and ticks per quarter note (TPQN), is the smallest unit of time used for sequencing note and automation events.
The number of pulses per quarter note is sometimes referred to as the resolution of a MIDI device, and affects the timing of notes that can be achieved by a sequencer. If the resolution is too low (too few PPQN), the performance recorded into the sequencer may sound artificial (being quantised by the pulse rate), losing all the subtle variations in timing that give the music a "human" feeling. Purposefully quantised music can have resolutions as low as 24 (the standard for Sync24 and MIDI, which allows triplets, and swinging by counting alternate numbers of clock ticks) or even 4 PPQN (which has only one clock pulse per 16th note). At the other end of the spectrum, modern computer-based MIDI sequencers designed to capture more nuance may use 960 PPQN and beyond.
This resolution is a measure of time relative to tempo, since the tempo defines the length of a quarter note and therefore the duration of each pulse (tick). In MIDI the tempo is expressed as MicroTempo, the TimeBase in microseconds per quarter note, so that beats per minute = 60,000,000 / MicroTempo and the duration of a single tick is MicroTempo / PPQN microseconds.
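The same arithmetic in a short illustrative sketch:

```python
def bpm_from_micro_tempo(micro_tempo_us: float) -> float:
    """MicroTempo is the number of microseconds per quarter note."""
    return 60_000_000 / micro_tempo_us

def tick_duration_us(micro_tempo_us: float, ppqn: int) -> float:
    """Duration of one tick (pulse) in microseconds."""
    return micro_tempo_us / ppqn

micro_tempo = 500_000   # 500,000 microseconds per quarter note
print(bpm_from_micro_tempo(micro_tempo))    # 120.0 BPM
print(tick_duration_us(micro_tempo, 24))    # ~20833.3 us per MIDI clock pulse
print(tick_duration_us(micro_tempo, 960))   # ~520.8 us per tick at 960 PPQN
```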
See also
DIN sync
Word clock
References
External links
Freeware to measure a midiclock beat signal
MAX/MSP documentation to their sync~ object
MIDI specification
Summary of MIDI messages
Song Position Pointer (SPP)
PPQN Timing Calculator
Explanation on Sweetwater
Information Retrieval for Music and Motion, Meinard Müller, Springer Science & Business Media, 09.09.2007 - 318 pages
Encodings
MIDI standards
Synchronization | MIDI beat clock | [
"Engineering"
] | 709 | [
"Telecommunications engineering",
"Synchronization"
] |
11,908,009 | https://en.wikipedia.org/wiki/91%20Aquarii%20b | 91 Aquarii b, also known as HD 219449 b, is an extrasolar planet orbiting in the 91 Aquarii system approximately 148 light-years away in the constellation of Aquarius. It orbits at an average distance of 105 Gm from its star, slightly closer than Venus is to the Sun (108 Gm). The planet takes about half an Earth year to orbit the star in a nearly circular orbit, with an eccentricity of less than 0.053.
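As a quick consistency check (an illustration added here, not data from the article), Kepler's third law in solar units ties the quoted separation and period together; the stellar mass they jointly imply, about 1.4 times that of the Sun, is only a by-product of the check:

```python
# Kepler's third law in solar units (planet mass neglected):
#   M_star [solar masses] ~ a**3 / P**2, with a in AU and P in years.
AU_IN_GM = 149.6                 # one astronomical unit, in gigametres

a_au = 105 / AU_IN_GM            # the quoted 105 Gm is about 0.70 AU
p_yr = 0.5                       # "half an Earth year"

implied_stellar_mass = a_au ** 3 / p_yr ** 2
print(round(a_au, 2), round(implied_stellar_mass, 2))   # ~0.7 AU, ~1.4 solar masses
```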
See also
HD 59686 b
Iota Draconis b
References
Aquarius (constellation)
Giant planets
Exoplanets discovered in 2003
Exoplanets detected by radial velocity | 91 Aquarii b | [
"Astronomy"
] | 134 | [
"Constellations",
"Aquarius (constellation)"
] |
11,908,069 | https://en.wikipedia.org/wiki/Polyconvex%20function | In the calculus of variations, the notion of polyconvexity is a generalization of the notion of convexity for functions defined on spaces of matrices. The notion of polyconvexity was introduced by John M. Ball as a sufficient condition for proving the existence of energy minimizers in nonlinear elasticity theory. It is satisfied by a large class of hyperelastic stored energy densities, such as Mooney–Rivlin and Ogden materials. The notion of polyconvexity is related to the notions of convexity, quasiconvexity and rank-one convexity through the following chain of implications: convexity ⇒ polyconvexity ⇒ quasiconvexity ⇒ rank-one convexity.
Motivation
Let Ω ⊂ ℝⁿ be an open bounded domain, and let W^{1,p}(Ω; ℝ^m) denote the Sobolev space of mappings from Ω to ℝ^m. A typical problem in the calculus of variations is to minimize a functional of the form
E(u) = ∫_Ω f(x, ∇u(x)) dx,
where the energy density function f : Ω × ℝ^{m×n} → ℝ satisfies p-growth, i.e., |f(x, F)| ≤ C(1 + |F|^p) for some constant C > 0 and 1 < p < ∞. It is well known from a theorem of Morrey and Acerbi–Fusco that a necessary and sufficient condition for E to be weakly lower semicontinuous on W^{1,p}(Ω; ℝ^m) is that f(x, ·) is quasiconvex for almost every x. With coercivity assumptions on f and boundary conditions on u, this leads to the existence of minimizers for E on W^{1,p}(Ω; ℝ^m). However, in many applications, the assumption of p-growth on the energy density is often too restrictive. In the context of elasticity, this is because the energy is required to grow unboundedly to +∞ as local measures of volume, such as det ∇u, approach zero. This led Ball to define the more restrictive notion of polyconvexity to prove the existence of energy minimizers in nonlinear elasticity.
Definition
A function f : ℝ^{m×n} → ℝ is said to be polyconvex if there exists a convex function g : ℝ^{τ(m,n)} → ℝ such that
f(F) = g(T(F)) for every F ∈ ℝ^{m×n}, where T : ℝ^{m×n} → ℝ^{τ(m,n)} is such that
T(F) = (F, adj₂ F, ..., adj_{min(m,n)} F).
Here, adjₛ F stands for the matrix of all s × s minors of the matrix F ∈ ℝ^{m×n}, 2 ≤ s ≤ min(m, n), and
τ(m,n) = Σ_{s=1}^{min(m,n)} σ(s), where σ(s) = (m choose s)(n choose s), so that σ(1) = mn and adj₁ F = F.
When m = n = 2, T(F) = (F, det F), and when m = n = 3, T(F) = (F, cof F, det F), where cof F denotes the cofactor matrix of F.
In the above definitions, the range of f can also be extended to ℝ ∪ {+∞}.
Properties
If f takes only finite values, then polyconvexity implies quasiconvexity and thus leads to the weak lower semicontinuity of the corresponding integral functional on a Sobolev space.
If m = 1 or n = 1, then polyconvexity reduces to convexity.
If f is polyconvex and finite-valued, then it is locally Lipschitz.
Polyconvex functions with subquadratic growth must be convex, i.e., if there exist α ≥ 0 and 0 ≤ p < 2 such that
f(F) ≤ α(1 + |F|^p) for every F ∈ ℝ^{m×n}, then f is convex.
Examples
Every convex function is polyconvex.
For the case m = n = 2 (and more generally for square matrices), the determinant function F ↦ det F is polyconvex but not convex. Energy densities of the kind that commonly appear in nonlinear elasticity, built as convex functions of F, cof F and det F jointly, are likewise polyconvex without being convex; a worked 2 × 2 example is sketched below.
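The following worked example (added here as an illustration; it is not part of the original article text) makes the 2 × 2 case concrete.

```latex
\textbf{Polyconvexity of } \det: \quad
\det F = g(T(F)), \qquad T(F) = (F, \det F), \qquad g(F, d) = d,
\text{ and } g \text{ is linear, hence convex.}

\textbf{Non-convexity of } \det: \quad
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
\det A = \det B = 0,

\det\!\left( \tfrac{1}{2}A + \tfrac{1}{2}B \right)
  = \det \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}
  = \tfrac{1}{4}
  \;>\; \tfrac{1}{2}\det A + \tfrac{1}{2}\det B = 0,

\text{so the convexity inequality fails.}
```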
References
Convex analysis
Calculus of variations
Matrices
Types of functions | Polyconvex function | [
"Mathematics"
] | 566 | [
"Functions and mappings",
"Mathematical objects",
"Matrices (mathematics)",
"Mathematical relations",
"Types of functions"
] |
11,908,132 | https://en.wikipedia.org/wiki/HD%2010647%20b | HD 10647 b, also catalogued as q1 Eridani b, is an extrasolar planet approximately 57 light-years away in the constellation of Eridanus (the River). The planet is a Jovian-class gas giant that orbits about 103% farther from its star than Earth does from the Sun (just over 2 AU). It takes about 33 months to complete one orbit, producing a radial-velocity semi-amplitude of 17.9 m/s.
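For illustration only (this calculation is not in the article), the quoted semi-amplitude can be converted into a minimum planetary mass with the standard radial-velocity relation. The stellar mass of about 1.1 solar masses and the near-circular orbit used below are outside assumptions, so the result is only indicative:

```python
import math

# Minimum mass from the radial-velocity semi-amplitude, assuming m_p << M_star:
#   m_p * sin(i) = K * (P / (2*pi*G))**(1/3) * M_star**(2/3) * sqrt(1 - e**2)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg

K = 17.9                 # m/s, the semi-amplitude quoted above
P = 33 * 30.44 * 86400   # about 33 months, converted to seconds
M_star = 1.1 * M_SUN     # assumed stellar mass (not stated in this article)
e = 0.0                  # assumed near-circular orbit

m_sin_i = K * (P / (2 * math.pi * G)) ** (1 / 3) * M_star ** (2 / 3) * math.sqrt(1 - e ** 2)
print(round(m_sin_i / M_JUP, 2))   # roughly 0.9 Jupiter masses, a lower limit
```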
See also
51 Pegasi b
91 Aquarii b
109 Piscium b
Epsilon Eridani b
References
External links
Eridanus (constellation)
Exoplanets discovered in 2003
Giant planets
Exoplanets detected by radial velocity
de:HD 10647 b | HD 10647 b | [
"Astronomy"
] | 135 | [
"Eridanus (constellation)",
"Constellations"
] |
11,908,190 | https://en.wikipedia.org/wiki/HD%2059686 | HD 59686 is a binary star system in the northern constellation of Gemini. It is visible to the naked eye as a dim point of light with an apparent visual magnitude of +5.45. The distance to this system is approximately 292 light years based on parallax, but it is drifting closer with a radial velocity of −34 km/s.
This is a single-lined spectroscopic binary system with a long orbital period and a high eccentricity of 0.73. The visible component is an aging giant star with a stellar classification of K2III, meaning it has ceased fusing hydrogen in its core and is on its way to becoming a red giant. The stellar radius is large: 11.2 times that of the Sun. The star is around 2.7 billion years old with 1.4 times the mass of the Sun. It is radiating 58 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,670 K.
The secondary component has a minimum mass 53% that of the Sun, which indicates it must be a star rather than a brown dwarf or a planet.
Planetary system
In November 2003, a planet was announced orbiting the giant star. A Doppler spectrometer was used to look for effects on the star caused by the gravitational tug of the orbiting planet. Using the amplitude obtained with the radial velocity method, the discoverer calculated the planetary mass as 5.25 Jupiter masses, with a period of 303 days. However, that mass is only a minimum, because the inclination of the orbit is not known. Using the stellar mass and the period, the semimajor axis was calculated as 0.911 astronomical units. The shape of the stellar wobble was found to be circular, implying that the planet has zero orbital eccentricity.
References
External links
K-type giants
Planetary systems with one confirmed planet
Spectroscopic binaries
Gemini (constellation)
Durchmusterung objects
059686
036616
2877 | HD 59686 | [
"Astronomy"
] | 397 | [
"Gemini (constellation)",
"Constellations"
] |
11,908,534 | https://en.wikipedia.org/wiki/Browder%E2%80%93Minty%20theorem | In mathematics, the Browder–Minty theorem (sometimes called the Minty–Browder theorem) states that a bounded, continuous, coercive and monotone function T from a real, separable reflexive Banach space X into its continuous dual space X∗ is automatically surjective. That is, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. (Note that T itself is not required to be a linear map.)
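Written out with the duality pairing ⟨·,·⟩ between X∗ and X, the hypotheses read as follows (an illustrative restatement added here, not a quotation of the original article):

```latex
\text{monotone:} \quad \langle T(u) - T(v),\, u - v \rangle \ge 0
  \quad \text{for all } u, v \in X,

\text{coercive:} \quad \frac{\langle T(u),\, u \rangle}{\lVert u \rVert_X}
  \;\longrightarrow\; \infty \quad \text{as } \lVert u \rVert_X \to \infty.

\text{Conclusion: for every } g \in X^{*} \text{ there exists } u \in X
  \text{ with } T(u) = g.
```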
The theorem is named in honor of Felix Browder and George J. Minty, who independently proved it.
See also
Pseudo-monotone operator; pseudo-monotone operators obey a near-exact analogue of the Browder–Minty theorem.
References
Banach spaces
Theorems in functional analysis
Operator theory | Browder–Minty theorem | [
"Mathematics"
] | 175 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
11,909,359 | https://en.wikipedia.org/wiki/Timeline%20of%20algebra | The following is a timeline of key developments of algebra:
See also
History of algebra – Historical development of algebra
References
History of algebra
Algebra | Timeline of algebra | [
"Mathematics"
] | 28 | [
"History of algebra",
"Algebra"
] |
11,909,413 | https://en.wikipedia.org/wiki/RF%20and%20microwave%20filter | Radio frequency (RF) and microwave filters represent a class of electronic filter designed to operate on signals in the megahertz to gigahertz frequency ranges (medium frequency to extremely high frequency). A filter of this kind is a component used in electronic systems to pass or reject specific frequencies and to attenuate unwanted signals within the RF and microwave range. This frequency range is the range used by most broadcast radio, television and wireless communication (cellphones, Wi-Fi, etc.), and thus most RF and microwave devices will include some kind of filtering on the signals transmitted or received. Such filters are commonly used as building blocks for duplexers and diplexers to combine or separate multiple frequency bands.
Filter functions
Four general filter functions are desirable:
Band-pass filter: select only a desired band of frequencies
Band-stop filter: eliminate an undesired band of frequencies
Low-pass filter: allow only frequencies below a cutoff frequency to pass
High-pass filter: allow only frequencies above a cutoff frequency to pass
Filter technologies
In general, most RF and microwave filters are most often made up of one or more coupled resonators, and thus any technology that can be used to make resonators can also be used to make filters. The unloaded quality factor of the resonators being used will generally set the selectivity the filter can achieve. The book by Matthaei, Young and Jones provides a good reference to the design and realization of RF and microwave filters. Generalized filter theory operates with resonant frequencies and coupling coefficients of coupled resonators in a microwave filter.
Lumped-element LC filters
The simplest resonator structure that can be used in rf and microwave filters is an LC tank circuit consisting of parallel or series inductors and capacitors. These have the advantage of being very compact, but the low quality factor of the resonators leads to relatively poor performance.
Lumped-element LC filters have both lower and upper practical frequency limits. As the frequency drops into the low-kilohertz to hertz range, the inductors required in the tank circuit become prohibitively large; very low frequency filters are therefore often designed with crystals to overcome this problem.
As the frequency gets higher, into the 600 MHz and higher range, the inductors in the tank circuit become too small to be practical. Since the reactance of an inductor increases linearly with frequency (X_L = 2πfL), achieving a given reactance at higher frequencies requires a proportionally lower, and eventually impractically small, inductance. An illustrative calculation is given below.
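A small numerical sketch (illustrative values only; the 50 Ω reactance target and 5 pF capacitance are arbitrary choices, not taken from any source) shows how quickly the required inductance shrinks:

```python
import math

def inductance_for_reactance(x_ohms: float, f_hz: float) -> float:
    """Inductance L for which X_L = 2*pi*f*L equals the target reactance."""
    return x_ohms / (2 * math.pi * f_hz)

def resonant_frequency(l_henry: float, c_farad: float) -> float:
    """Resonant frequency of an LC tank, f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# A 50-ohm reactance target at 600 MHz already calls for only ~13 nH,
# which is difficult to realise and control accurately on a board.
L = inductance_for_reactance(50, 600e6)
print(round(L * 1e9, 1))                              # ~13.3 nH

# Paired with 5 pF, such an inductor resonates at roughly 618 MHz.
print(round(resonant_frequency(L, 5e-12) / 1e6))      # ~618 MHz
```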
Planar filters
Planar transmission lines, such as microstrip, coplanar waveguide and stripline, can also make good resonators and filters. The processes used to manufacture microstrip circuits are very similar to those used to manufacture printed circuit boards, and these filters have the advantage of being largely planar.
Precision planar filters are manufactured using a thin-film process. Higher Q factors can be obtained by using low loss tangent dielectric materials for the substrate such as quartz or sapphire and lower resistance metals such as gold.
Coaxial filters
Coaxial transmission lines provide higher quality factor than planar transmission lines, and are thus used when higher performance is required. The coaxial resonators may make use of high-dielectric constant materials to reduce their overall size.
Cavity filters
Still widely used in the 40 MHz to 960 MHz frequency range, well-constructed cavity filters are capable of high selectivity even under power loads of at least a megawatt. A higher quality factor (Q), as well as increased performance stability at closely spaced (down to 75 kHz) frequencies, can be achieved by increasing the internal volume of the filter cavities.
Physical length of conventional cavity filters can vary from over 205 cm in the 40 MHz range, down to under 27.5 cm in the 900 MHz range.
In the microwave range (1000 MHz and up), cavity filters become more practical in terms of size and offer a significantly higher quality factor than lumped-element resonators and filters.
Dielectric filters
Pucks made of various dielectric materials can also be used to make resonators. As with the coaxial resonators, high-dielectric constant materials may be used to reduce the overall size of the filter. With low-loss dielectric materials, these can offer significantly higher performance than the other technologies previously discussed.
Electroacoustic filters
Electroacoustic resonators based on piezoelectric materials can be used for filters. Since acoustic wavelength at a given frequency is several orders of magnitude shorter than the electrical wavelength, electroacoustic resonators are generally smaller by size and weight than electromagnetic counterparts such as cavity resonators.
A common example of an electroacoustic resonator is the quartz resonator, essentially a cut of a piezoelectric quartz crystal clamped between a pair of electrodes. This technology is limited to some tens of megahertz. For microwave frequencies, typically more than 100 MHz, most filters use thin-film technologies such as surface acoustic wave (SAW) and thin-film bulk acoustic resonator (FBAR, TFBAR) structures.
Waveguide filter
The waffle-iron filter is an example.
Energy tunneling-based filters
These are a newer class of highly tunable microwave filters. Filters of this kind can be implemented in waveguide, in substrate-integrated waveguide (SIW) or on low-cost PCB technology, and can be tuned to lower or higher frequencies with the help of switches inserted at appropriate positions, achieving a broad tuning range.
Notes
External links
Article on microwave filter at Microwaves 101
A primer on RF filters for Software-defined Radio
Analog circuits
Distributed element circuits
Wireless tuning and filtering | RF and microwave filter | [
"Engineering"
] | 1,176 | [
"Radio electronics",
"Wireless tuning and filtering",
"Analog circuits",
"Electronic engineering",
"Distributed element circuits"
] |
11,911,464 | https://en.wikipedia.org/wiki/Model%20of%20hierarchical%20complexity | The model of hierarchical complexity (MHC) is a framework for scoring how complex a behavior is, such as verbal reasoning or other cognitive tasks. It quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. This model was developed by Michael Commons and Francis Richards in the early 1980s.
Overview
The model of hierarchical complexity (MHC) is a formal theory and a mathematical psychology framework for scoring how complex a behavior is. Developed by Michael Lamport Commons and colleagues, it quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. Its forerunner was the general stage model.
Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the hierarchical complexity of task accomplishment in any domain. It is based on the very simple notions that higher order task actions:
are defined in terms of the next lower ones (creating hierarchy);
organize the next lower actions;
organize lower actions in a non-arbitrary way (differentiating them from simple chains of behavior).
It is cross-culturally and cross-species valid. The reason it applies cross-culturally is that the scoring is based on the mathematical complexity of the hierarchical organization of information. Scoring does not depend upon the content of the information (e.g., what is done, said, written, or analyzed) but upon how the information is organized.
The MHC is a non-mentalistic model of developmental stages. It specifies 16 orders of hierarchical complexity and their corresponding stages. It is different from previous proposals about developmental stage applied to humans; instead of attributing behavioral changes across a person's age to the development of mental structures or schemata, this model posits that sequences of task behaviors form hierarchies that become increasingly complex. Because less complex tasks must be completed and practiced before more complex tasks can be acquired, this accounts for the developmental changes seen in an individual person's performance of complex tasks. For example, a person cannot perform arithmetic until the numeral representations of numbers are learned, and a person cannot multiply sums of numbers until addition is learned. Likewise, however much innate ability helps a person grasp small quantities directly, it cannot by itself support multiplying large numbers; addition must be learned first.
The creators of the MHC claim that previous theories of stage have confounded the stimulus and response in assessing stage by simply scoring responses and ignoring the task or stimulus. The MHC separates the task or stimulus from the performance. The participant's performance on a task of a given complexity represents the stage of developmental complexity.
Previous stage theories were unsatisfying to Commons and Richards because, in their view, those theories merely described sequential changes in human behavior without demonstrating the existence of the stages themselves. This led them to identify two concepts they felt a successful developmental theory should address: (1) the hierarchical complexity of the task to be solved, and (2) the psychology, sociology, and anthropology of the task performance (and the development of that performance).
Vertical complexity of tasks performed
One major basis for this developmental theory is task analysis. The study of ideal tasks, including their instantiation in the real world, has been the basis of the branch of stimulus control called psychophysics. Tasks are defined as sequences of contingencies, each presenting stimuli and each requiring a behavior or a sequence of behaviors that must occur in some non-arbitrary fashion. The complexity of behaviors necessary to complete a task can be specified using the horizontal complexity and vertical complexity definitions described below. Behavior is examined with respect to the analytically-known complexity of the task.
Tasks are quantal in nature. They are either completed correctly or not completed at all. There is no intermediate state (tertium non datur). For this reason, the model characterizes all stages as P-hard and functionally distinct. The orders of hierarchical complexity are quantized like the electron atomic orbitals around the nucleus: each task difficulty has an order of hierarchical complexity required to complete it correctly, analogous to the atomic Slater determinant. Since tasks of a given quantified order of hierarchical complexity require actions of a given order of hierarchical complexity to perform them, the stage of the participant's task performance is equivalent to the order of complexity of the successfully completed task. The quantal feature of tasks is thus particularly instrumental in stage assessment because the scores obtained for stages are likewise discrete.
Every task contains a multitude of subtasks. When the subtasks are carried out by the participant in a required order, the task in question is successfully completed. Therefore, the model asserts that all tasks fit in some configured sequence of tasks, making it possible to precisely determine the hierarchical order of task complexity. Tasks vary in complexity in two ways: either as horizontal (involving classical information); or as vertical (involving hierarchical information).
Horizontal complexity
Classical information describes the number of "yes–no" questions it takes to do a task. For example, if one asked a person across the room whether a penny came up heads when they flipped it, their saying "heads" would transmit 1 bit of "horizontal" information. If there were 2 pennies, one would have to ask at least two questions, one about each penny. Hence, each additional 1-bit question would add another bit. Let us say they had a four-faced top with the faces numbered 1, 2, 3, and 4. Instead of spinning it, they tossed it against a backboard as one does with dice in a game of craps. Again, there would be 2 bits. One could ask them whether the face had an even number. If it did, one would then ask if it were a 2. Horizontal complexity, then, is the sum of bits required by just such tasks as these.
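The same counting can be written as a one-line calculation (an illustrative sketch added here; the base-2 logarithm simply counts the yes–no questions needed for equally likely outcomes):

```python
import math

def horizontal_complexity(num_equally_likely_outcomes: int) -> float:
    """Bits of classical information: the number of yes-no questions needed
    to identify one outcome among equally likely alternatives."""
    return math.log2(num_equally_likely_outcomes)

print(horizontal_complexity(2))   # one penny: 1.0 bit
print(horizontal_complexity(4))   # two pennies, or a four-faced top: 2.0 bits
```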
Vertical complexity
Hierarchical complexity refers to the number of recursions that the coordinating actions must perform on a set of primary elements. Actions at a higher order of hierarchical complexity: (a) are defined in terms of actions at the next lower order of hierarchical complexity; (b) organize and transform the lower-order actions (see Figure 2); (c) produce organizations of lower-order actions that are qualitatively new and not arbitrary, and cannot be accomplished by those lower-order actions alone. Once these conditions have been met, we say the higher-order action coordinates the actions of the next lower order.
To illustrate how lower actions get organized into more hierarchically complex actions, let us turn to a simple example. Completing the entire operation 3 × (4 + 1) constitutes a task requiring the distributive act. That act non-arbitrarily orders adding and multiplying to coordinate them. The distributive act is therefore one order more hierarchically complex than the acts of adding and multiplying alone; it indicates the singular proper sequence of the simpler actions. Although simply adding results in the same answer, people who can do both display a greater freedom of mental functioning. Additional layers of abstraction can be applied. Thus, the order of complexity of the task is determined through analyzing the demands of each task by breaking it down into its constituent parts.
The hierarchical complexity of a task refers to the number of concatenation operations it contains, that is, the number of recursions that the coordinating actions must perform. An order-three task has three concatenation operations. A task of order three operates on one or more tasks of vertical order two and a task of order two operates on one or more tasks of vertical order one (the simplest tasks).
Stages of development
Stage theories describe human organismic and/or technological evolution as systems that move through a pattern of distinct stages over time. Here development is described formally in terms of the model of hierarchical complexity (MHC).
Formal definition of stage
Since actions are defined inductively, so is the function h, known as the order of the hierarchical complexity. To each action A, we wish to associate a notion of that action's hierarchical complexity, h(A). Given a collection of actions A and a participant S performing A, the stage of performance of S on A is the highest order of the actions in A completed successfully at least once, i.e., it is: stage (S, A) = max{h(A) | A ∈ A and A completed successfully by S}. Thus, the notion of stage is discontinuous, having the same transitional gaps as the orders of hierarchical complexity. This is in accordance with previous definitions.
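A minimal sketch (using an illustrative encoding chosen here, not the MHC's own formalism) of the recursive order function h and the stage formula above:

```python
# Tasks are represented either as a primitive action (a string) or as a tuple
# whose first entry names a coordinating action and whose remaining entries
# are the lower-order tasks it organizes.

def order(task) -> int:
    """Hierarchical order h: primitives count as order 1 here, and a
    coordinating action is one order above the highest task it organizes."""
    if isinstance(task, str):                  # primitive action
        return 1
    _name, *subtasks = task
    return 1 + max(order(t) for t in subtasks)

def stage(completed_tasks) -> int:
    """Stage of performance: the highest order among tasks the participant
    has completed successfully at least once."""
    return max(order(t) for t in completed_tasks)

add = "add"
multiply = "multiply"
distribute = ("distribute 3*(4+1)", add, multiply)   # coordinates both actions

print(order(add), order(distribute))        # 1 2: the distributive act is one order higher
print(stage([add, multiply, distribute]))   # 2: highest order completed successfully
```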
Because MHC stages are conceptualized in terms of the hierarchical complexity of tasks rather than in terms of mental representations (as in Piaget's stages), the highest stage represents successful performances on the most hierarchically complex tasks rather than intellectual maturity.
Stages of hierarchical complexity
The following table gives descriptions of each stage in the MHC.
Relationship with Piaget's theory
The MHC builds on Piagetian theory but differs from it in many ways; notably the MHC has additional higher stages. In both theories, one finds:
Higher-order actions defined in terms of lower-order actions. This forces the hierarchical nature of the relations and makes the higher-order tasks include the lower ones and requires that lower-order actions are hierarchically contained within the relative definitions of the higher-order tasks.
Higher-order of complexity actions organize those lower-order actions. This makes them more powerful. Lower-order actions are organized by the actions with a higher order of complexity, i.e., the more complex tasks.
What Commons et al. (1998) have added includes:
Higher-order-of-complexity actions organize those lower-order actions in a non-arbitrary way.
This makes it possible for the model's application to meet real world requirements, including the empirical and analytic. Arbitrary organization of lower order of complexity actions, possible in the Piagetian theory, despite the hierarchical definition structure, leaves the functional correlates of the interrelationships of tasks of differential complexity formulations ill-defined.
Moreover, the model is consistent with the neo-Piagetian theories of cognitive development. According to these theories, progression to higher stages or levels of cognitive development is caused by increases in processing efficiency and working memory capacity. That is, higher-order stages place increasingly higher demands on these functions of information processing, so that their order of appearance reflects the information processing possibilities at successive ages.
The following dimensions are inherent in the application:
Task and performance are separated.
All tasks have an order of hierarchical complexity.
There is only one sequence of orders of hierarchical complexity.
Hence, there is structure of the whole for ideal tasks and actions.
There are transitional gaps between the orders of hierarchical complexity.
Stage is defined as the most hierarchically complex task solved.
There are discrete gaps in Rasch scaled stage of performance.
Performance stage is different task area to task area.
There is no structure of the whole—horizontal décalage—for performance. It is not inconsistency in thinking within a developmental stage. Décalage is the normal modal state of affairs.
Orders and corresponding stages
The MHC specifies 16 orders of hierarchical complexity and their corresponding stages, positing that each of Piaget's substages is, in fact, a robustly hard stage. The MHC adds five postformal stages to Piaget's developmental trajectory: systematic stage 12, metasystematic stage 13, paradigmatic stage 14, cross-paradigmatic stage 15, and meta-cross-paradigmatic stage 16. It may be that Piaget's consolidated formal stage is the same as the systematic stage. The sequence is as follows: (0) calculatory, (1) automatic, (2) sensory & motor, (3) circular sensory-motor, (4) sensory-motor, (5) nominal, (6) sentential, (7) preoperational, (8) primary, (9) concrete, (10) abstract, (11) formal, and the five postformal: (12) systematic, (13) metasystematic, (14) paradigmatic, (15) cross-paradigmatic, and (16) meta-cross-paradigmatic. The first four stages (0–3) correspond to Piaget's sensorimotor stage at which infants and very young children perform. Adolescents and adults can perform at any of the subsequent stages. MHC stages 4 through 5 correspond to Piaget's pre-operational stage; 6 through 8 correspond to his concrete operational stage; and 9 through 11 correspond to his formal operational stage.
More complex behaviors characterize multiple system models. The four highest stages in the MHC are not represented in Piaget's model. The higher stages of the MHC have extensively influenced the field of positive adult development. Some adults are said to develop alternatives to, and perspectives on, formal operations; they use formal operations within a "higher" system of operations. Some theorists call the more complex orders of cognitive tasks "postformal thought", but other theorists argue that these higher orders cannot exactly be labelled as postformal thought.
Jordan (2018) argued that unidimensional models such as the MHC, which measure level of complexity of some behavior, refer to only one of many aspects of adult development, and that other variables are needed (in addition to unidimensional measures of complexity) for a fuller description of adult development.
Empirical research using the model
The MHC has a broad range of applicability. Its mathematical foundation permits it to be used by anyone examining task performance that is organized into stages. It is designed to assess development based on the order of complexity which the actor utilizes to organize information. The model thus allows for a standard quantitative analysis of developmental complexity in any cultural setting. Other advantages of this model include its avoidance of mentalistic explanations, as well as its use of quantitative principles which are universally applicable in any context.
The following practitioners can use the MHC to quantitatively assess developmental stages:
Cross-cultural developmentalists
Animal developmentalists
Evolutionary psychologists
Organizational psychologists
Developmental political psychologists
Learning theorists
Perception researchers
Historians of science
Educators
Therapists
Anthropologists
List of examples
In one representative study, Commons, Goodheart, and Dawson (1997) found, using Rasch analysis (Rasch, 1980), that hierarchical complexity of a given task predicts stage of a performance, the correlation being r = 0.92. Correlations of similar magnitude have been found in a number of the studies. The following are examples of tasks studied using the model of hierarchical complexity or Kurt W. Fischer's similar skill theory:
Algebra (Commons, Giri, & Harrigan, 2014)
Animal stages (Commons & Miller, 2004)
Atheism (Commons-Miller, 2005)
Attachment and loss (Commons, 1991; Miller & Lee, 2000)
Balance beam and pendulum (Commons, Goodheart, & Bresette, 1995; Commons, Giri, & Harrigan, 2014)
Contingencies of reinforcement (Commons & Giri, 2016)
Counselor stages (Lovell, 2002)
Empathy of hominids (Commons & Wolfsont, 2002)
Epistemology (Kitchener & Fischer, 1990; Kitchener & King, 1990)
Evaluative reasoning (Dawson, 2000)
Four story problem (Commons, Richards & Kuhn, 1982; Kallio & Helkama, 1991)
Good education (Dawson-Tunik, 2004)
Good interpersonal relations (Armon, 1984a; Armon, 1984b; Armon, 1989)
Good work (Armon, 1993)
Honesty and kindness (Lamborn, Fischer & Pipp, 1994)
Informed consent (Commons & Rodriguez, 1990; Commons & Rodriguez, 1993; Commons, Goodheart, Rodriguez, & Gutheil, 2006)
Language stages (Commons et al., 2007)
Leadership before and after crises (Oliver, 2004)
Loevinger's sentence completion task (Cook-Greuter, 1990)
Moral judgment (Armon & Dawson, 1997; Dawson, 2000)
Music (Beethoven) (Funk, 1989)
Physics tasks (Inhelder & Piaget, 1958)
Political development (Sonnert & Commons, 1994)
Report patient's prior crimes (Commons, Lee, Gutheil, et al., 1995)
Social perspective-taking (Commons & Rodriguez, 1990; Commons & Rodriguez, 1993)
Spirituality (Miller & Cook-Greuter, 1994)
Tool making of hominids (Commons & Miller 2002)
Views of the good life (Armon, 1984b; Danaher, 1993; Dawson, 2000; Lam, 1995)
Workplace culture (Commons, Krause, Fayer, & Meaney, 1993)
Workplace organization (Bowman, 1996)
As of 2014, people and institutes from all the major continents of the world, except Africa, have used the model of hierarchical complexity. Because the model is very simple and is based on analysis of tasks and not just performances, it is dynamic. With the help of the model, it is possible to quantify the occurrence and progression of transition processes in task performances at any order of hierarchical complexity.
Criticisms
The descriptions of stages 13–15 have been described as insufficiently precise.
See also
References
Literature
Biggs, J.B. & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy (structure of the observed learning outcome). New York: Academic Press.
Fischer, K.W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87(6), 477–531.
External links
Behavioral Development Bulletin
Society for Research in Adult Development
Cognition
Management cybernetics
Complex systems theory
Developmental stage theories
Psychophysics | Model of hierarchical complexity | [
"Physics"
] | 3,666 | [
"Psychophysics",
"Applied and interdisciplinary physics"
] |
11,912,671 | https://en.wikipedia.org/wiki/Israeli%20demolition%20of%20Palestinian%20property | Demolition of Palestinian property is a method Israel has used in the Israeli-occupied territories since they came under its control in the Six-Day War to achieve various aims. Broadly speaking, demolitions can be classified as either administrative, punitive/dissuasive and as part of military operations. The Israeli Committee Against House Demolitions estimated that Israel had razed 55,048 Palestinian structures as of 2022. In the first several months of the ongoing Israel–Hamas war, Israel further demolished over 2,000 Palestinian homes in the West Bank.
Administrative house demolitions are done to enforce building codes and regulations, which in the occupied Palestinian territories are set by the Israeli military. Critics claim that they are used as a means to Judaize parts of the occupied territory, especially East Jerusalem.
Punitive house demolitions involve demolishing houses of Palestinians or neighbors and relatives of Palestinians suspected of violent acts against Israelis. These target the homes where the suspects live. Proponents of the method claim that it deters violence while critics claim that it has not been proven effective and might even trigger more violence. Punitive house demolitions have been criticized by a Palestinian human rights organization as a form of collective punishment and thus a war crime under international law.
Method
Demolitions are carried out by the Israeli Army Combat Engineering Corps using armored bulldozers, usually the Caterpillar D9, but also with excavators (for tall multi-story buildings) and wheel loaders (for small houses with low risk) modified by the IDF. The heavily armored IDF Caterpillar D9 is often used when there is a risk in demolishing the building (such as when armed insurgents are barricaded inside or the structure is rigged with explosives and booby traps). Multi-story buildings, flats, and explosive laboratories are demolished with explosive devices set by IDF demolition experts of Yaalom's Sayeret Yael. Amnesty International has also described house demolitions that were carried out by the IDF using "powerful explosive charges".
Administrative demolition
Some house demolitions are allegedly performed because the houses may have been built without permits, or are in violation of various building codes, ordinances, or regulations. Amnesty International claims that Israeli authorities are in fact systematically denying building permit requests in Arab areas as a means of appropriating land. This is disputed by Israeli sources, who claim that both Arabs and Jews enjoy a similar rate of application approvals.
Dr. Meir Margalit of Israeli Committee Against House Demolitions writes:
"The thinking is that a national threat calls for a national response, invariably aggressive. Accordingly, a Jewish house without a permit is an urban problem; but a Palestinian home without a permit is a strategic threat. A Jew building without a permit is 'cocking a snook at the law'; a Palestinian doing the same is defying Jewish sovereignty over Jerusalem."
Punitive demolition
Although revoked by the British, the Mandatory Palestine Defence (Emergency) Regulations were adopted by Israel on its formation. These regulations gave authority to military commanders to confiscate and raze "any house, structure or land... the inhabitants of which he is satisfied have committed... any offence against these Regulations involving violence."
In 1968, after Israel occupied the West Bank and Gaza, Theodor Meron, then legal adviser to the Israeli Foreign Ministry, advised the Prime Minister's office in a top secret memorandum that house demolitions, even of suspected terrorists' residences, violated the 1949 Fourth Geneva Convention on the protection of civilians in war. Undertaking such measures, as though they were in continuity with British mandatory emergency regulations, might be useful as hasbara but were "legally unconvincing". The advice was ignored. His view, according to Gershom Gorenberg, is shared by nearly all scholars of international law, prominent Israeli experts included. The practice of demolishing Palestinian houses began within two days of the conquest of the area in the Old City of Jerusalem known as the Moroccan Quarter, adjacent to the Western Wall. One of the first measures adopted, without legal authorization, on the conquest of Jerusalem in 1967 was to evict 650 Palestinians from their homes in the heart of Jerusalem, and reduce their homes and shrines to rubble in order to make way for the construction of the plaza. From the outset of the occupation of the Palestinian territories up to 2019, according to an estimate by the ICAHD, Israel has razed 49,532 Palestinian structures, with a concomitant displacement of hundreds of thousands of Palestinians.
Israel regards its practice as a form of deterrence of terrorism, since a militant is thereby forced to consider the effect of his actions on his family. Before the First Intifada, the measure was considered to be used only in exceptional circumstances, but with that uprising it became commonplace, no longer requiring the Defense Minister's approval but a measure left to the discretion of regional commanders. Israel demolished 103 houses in 1987; the following year the number rose to 423. 510 Palestinian homes of men alleged to be involved in or convicted of security offenses, or because the homes were said to function as screens for actions hostile to the Israeli army or settlers, were demolished. A further 110 were shelled in the belief armed men were inside, and overall another 1,497 were razed for lacking Israeli building permits, leaving an estimated 10,000 children homeless. Between September 2000 and the end of 2004, of the 4,100 homes the IDF razed in the territories, 628, housing 3,983 people, were undertaken as punishment because a member of a family had been involved in the Second Intifada. From 2006 until 31 August 2018, Israel demolished at least 1,360 Palestinian residential units in the West Bank (not including East Jerusalem), causing 6,115 people – including at least 3,094 minors – to lose their homes. 698 of these, homes to 2,948 Palestinians of whom 1,334 were minors, were razed in the Jordan Valley (January 2006–September 2017). Violations of building codes are a criminal offense in Israeli law, and this was only extended to the West Bank in 2007. Israel has demolished or compelled the owners to demolish, 1097 homes in East Jerusalem between 2004 and 2020, leaving 3,579 people of whom 1,899 minors, homeless. The number of homes demolished in the rest of the West Bank from 2006 until 30 September 2018 is estimated to be at least 1,373, resulting in homelessness for 6,133 Palestinians, including 3,103 minors. No settler has ever been prosecuted for engaging in such infractions, and only 3% of reported violations by settlers have led to demolitions. Even huts by shepherds, on which taxes have been duly paid, can be demolished.
During the Second Intifada, the IDF adopted a policy of house demolition following a wave of suicide bombings. Israel justified the policy as deterrence against terrorism and as an incentive for the families of potential suicide bombers to dissuade them from attacking. Demolitions can also occur in the course of fighting. During Operation Defensive Shield, several IDF soldiers were killed early in the conflict while searching houses containing militants. In response, the IDF began surrounding such houses, calling on the occupants (civilian and militant) to exit, and demolishing the house on top of any militants who did not surrender. This tactic, called nohal sir lachatz, is now used whenever feasible, that is, for buildings that are not multi-storey and stand apart from other houses. In some heavy fighting incidents, especially the 2002 Battle of Jenin and Operation Rainbow in Rafah in 2004, heavily armored IDF Caterpillar D9 bulldozers were used to demolish houses to widen alleyways, uncover tunnels, or secure locations for IDF troops. The result was an indiscriminate use of demolitions against civilian housing unconnected to terrorism that left 1,000 people homeless in the Rafah Refugee Camp.
According to a report by Amnesty International in 1999, house demolitions are usually done without prior warning and the home's inhabitants are given little time to evacuate. According to a 2004 Human Rights Watch report, many families in Rafah own a "cluster of homes". For example, the family may own a "small house from earlier days in the camp, often with nothing more than an asbestos roof". Later, sons will build homes nearby when they start their own families.
In February 2005, the Ministry of Defense ordered an end to the demolition of houses for the purpose of punishing the families of suicide bombers unless there is "an extreme change in circumstances". However, house demolitions continue for other reasons.
In 2009, after a string of fatal attacks by Palestinians against Israelis in Jerusalem, the Israeli High Court of Justice ruled in favor of the IDF to seal with cement the family homes of Palestinian terrorists as a deterrent against terrorism.
As a punitive measure, one study by a Northwestern and Hebrew University group concluded that prompt demolitions lowered the rate of suicide attacks for a month and are an effective deterrent against terrorism; the effect is related to the identity of the house's owner, and such demolitions result in a "significant decrease" in Palestinian terrorist attacks. Conversely, an internal IDF report of 2005, analyzing the effectiveness of the policy during the Second Intifada, in which 3,000 civilian homes were demolished, found that terror attacks increased after house demolitions, that the practice only stimulated hatred of Israel and that the damage it caused outweighed any benefits, and recommended that it be dropped.
Amnesty International has criticized the lack of due process in Israel's use of house demolitions. Many demolitions are carried out with no warning and no opportunity for the householder to appeal. In 2002, a proposed demolition was appealed to the Israeli Supreme Court, which ruled that there must be a right to appeal unless doing so would "endanger the lives of Israelis or if there are combat activities in the vicinity." In a later ruling, the Supreme Court decided that demolitions without advance warning or due process may be carried out if advance notice would hinder the demolition. Amnesty describes this as "a virtual green light" for demolition without warning.
Palestinian identity is deeply imbued with the sense of national loss and of place engendered by the Nakba, and according to physicians studying West Bank residents whose homes have been destroyed, such events cause a retraumatization of the Nakba in the families affected.
On 8 July 2021, Israeli army forces demolished a luxurious mansion in Turmus Ayya that was the family home of Sanaa Shalabi, who lived there alone with three of her seven children. She was the estranged wife of Muntasir Shalabi, a Palestinian-American who had murdered an Israeli citizen that May. The couple had been separated since 2008; Muntasir had married three other women in the meantime and stayed in the home for two months every year for family visits. The U.S. Embassy in Israel stated that "the home of an entire family should not be demolished for the actions of one individual." Gideon Levy called the demolition an instance of apartheid, since Jewish terrorists never have their family homes destroyed.
Statistics
At least 741 Palestinians in the occupied West Bank and East Jerusalem were made homeless between January and 30 September 2020 due to demolitions, according to data compiled by Israeli rights group B'tselem.
As of August 23, 2020, 89 residential units had been demolished in East Jerusalem that year, compared to 104 in 2019 and 72 in 2018. In the first three weeks of August alone, 24 homes were demolished.
The Palestinian village Aqabah, located in the northeastern West Bank, is threatened by demolition orders issued by the Israeli Civil Administration against the entire village. The Civil Administration had previously expropriated large areas of privately registered land in the village, and as of May 2008 it has threatened to demolish the following structures: the mosque, the British government-funded medical clinic, the internationally funded kindergarten, the Rural Women's Association building, the roads, the water tank, and nearly all private homes. According to the Rebuilding Alliance, a California-based organization that opposes house demolitions, Haj Sami Sadek, the mayor of the village, has circulated an open letter asking for assistance. Gush Shalom, the Israeli Peace Bloc, and the Israeli Committee Against House Demolitions are said to be supporting the campaign.
Recent conflicts
House demolition has been used in an on-again-off-again fashion by the Israeli government during the Second Intifada. More than 3,000 homes have been destroyed in this way. House demolition was used to destroy the family homes of Saleh Abdel Rahim al-Souwi, perpetrator of the Tel Aviv bus 5 massacre, and Yahya Ayyash, Hamas's chief bomb maker, known as "the engineer", as well as the perpetrators of the first and second Jerusalem bus 18 massacres, and the Ashkelon bus station bombing.
According to Peace Now, approvals for building in Israeli settlements in East Jerusalem have expanded by 60% since Trump became US president in 2017. Since 1991, Palestinians, who make up the majority of the residents in the area, have received only 30% of the building permits.
Area C
According to B'tselem, since the 1993 Oslo Accords Israel has issued over 14,600 demolition orders for Palestinian infrastructure, of which it has carried out roughly 2,925. In the period 2000-2012, Palestinians were given only 211 permits to build; from 2009 to 2012, only 27 permits were granted. In 2014, according to Ma'an News Agency, citing Bimkom, only one such permit was issued.
On 7 July 2021, the Norwegian Refugee Council (NRC) said Israel declared Humsa al-Bqai'a a "closed military area" and blocked access for international observers. The NRC said that Israeli authorities must "immediately halt attempts to forcibly transfer around 70 Palestinians, including 35 children" following the Bedouin community's property being demolished for the seventh time since November 2020.
Legal status
The use of house demolition under international law is today governed by the Fourth Geneva Convention, enacted in 1949, which protects non-combatants in occupied territories. Article 53 provides that "Any destruction by the Occupying Power of real or personal property belonging individually or collectively to private persons ... is prohibited, except where such destruction is rendered absolutely necessary by military operations." House demolition is considered a form of collective punishment. According to the law of occupation, the destruction of property, save for reasons of absolute military necessity, is prohibited.
However, Israel, which is a party to the Fourth Geneva Convention, asserts that the terms of the Convention are not applicable to the Palestinian territories on the grounds that the territories do not constitute a state which is a party to the Fourth Geneva Convention. This position is rejected by human rights organisations such as Amnesty International, which notes that "it is a basic principle of human rights law that international human rights treaties are applicable in all areas in which states parties exercise effective control, regardless of whether or not they exercise sovereignty in that area."
Justification and criticism
Justification
In May 2004, the Israeli Foreign Ministry publicly stated:
"...other means employed by Israel against terrorists is the demolition of homes of those who have carried out suicide attacks or other grave attacks, or those who are responsible for sending suicide bombers on their deadly missions. Israel has few available and effective means in its war against terrorism. This measure is employed to provide effective deterrence of the perpetrators and their dispatchers, not as a punitive measure. This practice has been reviewed and upheld by the High Court of Justice"
House demolition is typically justified by the IDF on the basis of:
Deterrence, achieved by deterring the relatives of those who carry out, or are suspected of involvement in carrying out, attacks. Benmelech, Berrebi and Klor call demolitions of this type, which target the homes of terror operatives, "punitive demolitions".
The following types are labelled as "precautionary demolitions" by Benmelech, Berrebi and Klor, however punishing they may feel to the impacted families.
Counter-terrorism, by destroying militant facilities such as bomb laboratories, weapons factories, weapons and ammunition warehouses, headquarters, offices, and others.
Forcing out an individual barricaded inside a house, which may be rigged with explosives, without risking soldiers' lives.
Self-defence, by destroying possible hideouts and rocket-propelled grenade or gun posts.
Combat engineering, clearing a path for tanks and armoured personnel carriers.
Destroying structures rigged with booby traps and explosives in order to prevent risk to soldiers and civilians.
Israeli historian Yaacov Lozowick, however, has implied that there is a moral basis for demolishing the houses of the families of suicide bombers.
Criticism
United Nations agencies and human rights groups such as Amnesty International and the International Committee of the Red Cross, which oppose the house demolitions, reject the IDF's claims and document numerous instances where they argue the IDF's claims do not apply. They accuse the Israeli government and IDF of other motives:
Collective punishment, the punishment of an innocent Palestinian "for an offence he or she has not personally committed."
Taking over West Bank Palestinian land by annexation to build the Israeli West Bank barrier or to create, expand or otherwise benefit Israeli settlements.
In 2004, Human Rights Watch published the report Razing Rafah: Mass Home Demolitions in the Gaza Strip. The report documented what it described as a "pattern of illegal demolitions" by the IDF in Rafah, a refugee camp and city at the southern end of the Gaza Strip on the border with Egypt where sixteen thousand people lost their homes after the Israeli government approved a plan to expand the de facto "buffer zone" in May 2004. The IDF's main stated rationales for the demolitions were responding to and preventing attacks on its forces and the suppression of weapons smuggling through tunnels from Egypt.
The effectiveness of house demolitions as a deterrence has been questioned. In 2005, an Israeli Army commission to study house demolitions found no proof of effective deterrence and concluded that the damage caused by the demolitions overrides its effectiveness. As a result, the IDF approved the commission's recommendations to end punitive demolitions of Palestinian houses.
A number of human rights organizations, including Human Rights Watch and the ICAHD, oppose the practice. Human Rights Watch has argued that the practice violates international laws against collective punishment, the destruction of private property, and the use of force against civilians. According to Amnesty International, "The destruction of Palestinian homes, agricultural land and other property in the Occupied Territories, including East Jerusalem, is inextricably linked with Israel's long-standing policy of appropriating as much as possible of the land it occupies, notably by establishing Israeli settlements." In October 1999, during the "Peace Process" and before the start of the Second Intifada, Amnesty International wrote that: "well over one third of the Palestinian population of East Jerusalem live under threat of having their house demolished. ... Threatened houses exist in almost every street and it is probable that the great majority of Palestinians live in or next to a house due for demolition."
"House demolitions ostensibly occur because the homes are built 'illegally' – i.e. without a permit. Officials and spokespersons of the Israeli government have consistently maintained that the demolition of Palestinian houses is based on planning considerations and is carried out according to the law. ... But the Israeli policy has been based on discrimination. Palestinians are targeted for no other reasons than that they are Palestinians. ... [Israel has] discriminated in the application of the law, strictly enforcing planning prohibitions where Palestinian houses are built and freely allowing amendments to the plans to promote development where Israelis are setting up settlements."
In May 2008, a UN agency said that thousands of Palestinians in the occupied West Bank risk being displaced as the Israeli authorities threaten to tear down their homes and in some cases entire communities. "To date, more than 3,000 Palestinian-owned structures in the West Bank have pending demolition orders, which can be immediately executed without prior warning," the UN Office for Coordination of Humanitarian Affairs said in a report.
Supreme Court justice Menachem Mazuz, who retired from the court in April 2021, was one of the few justices who opposed demolishing houses over the actions of a family member. He told Haaretz: "I went against the stream explicitly and consciously. I considered demolishing homes to be immoral, contrary to the law and of dubious effectiveness. My feeling was that it was done to placate public opinion, and that the leadership, too, is aware that this is not what will prevent the next act of terror."
In 2009, the then US Secretary of State Hillary Clinton criticized the Israeli government's plans to demolish Palestinian homes in East Jerusalem, calling the action a violation of international obligations.
Commentary and analysis
A January 2015 efficacy study by Efraim Benmelech, Berrebi and Klor distinguishes between "punitive demolitions", in which homes belonging to the families of terror operatives are demolished, and "precautionary demolitions", such as the demolition of a house well-positioned for use by Palestinian snipers. Their results, which The New Republic calls "politically explosive," indicate that "precautionary demolitions" have caused suicide attacks to increase, a "48.7 percent increase in the number of suicide terrorists from an average district," while in the months immediately following a demolition, punitive demolitions caused terror attacks to decline by between 11.7 and 14.9 percent. However, Klor later described the effect of punitive demolitions as "small, localized and diminish[ing] over time" and suggested that the real reason they were carried out was "to placate the Israeli public".
See also
Bulldozer politics
Destruction of cultural heritage during the Israeli invasion of the Gaza Strip
Destruction of cultural heritage by the Islamic State
Forced displacement
Internally displaced person
2014 Israel–Gaza conflict § Destruction of homes
Roof knocking
Notes
Citations
Sources
*
Further reading
External links
Israeli Committee Against House Demolitions
B'Tselem - Statistics on demolition of houses built without permits in the West Bank (excluding East Jerusalem)
The Rebuilding Alliance
A Layman's Guide to Home Demolitions in East Jerusalem: An Ir Amim Report
The Civic Coalition for Palestinian Rights in Jerusalem: Latest information about home demolition in Jerusalem
Israeli–Palestinian conflict
Nakba
Human rights abuses in Israel
Human rights abuses in the State of Palestine
History of the Palestinian refugees
Destruction of buildings
Counterinsurgency
Demolition
Collective punishment
Attacks on buildings and structures in Israel
Attacks on buildings and structures in the State of Palestine | Israeli demolition of Palestinian property | [
"Engineering"
] | 4,654 | [
"Construction",
"Destruction of buildings",
"Architecture",
"Demolition"
] |
11,912,808 | https://en.wikipedia.org/wiki/Bottleneck%20%28software%29 | In software engineering, a bottleneck occurs when the capacity of an application or a computer system is limited by a single component, like the neck of a bottle slowing down the overall water flow. The bottleneck has the lowest throughput of all parts of the transaction path.
System designers try to mitigate bottlenecks by locating and tuning them in a software application. Typical engineering bottlenecks include the processor, a communication link, and disk I/O. A component becomes a bottleneck when work arrives at it faster than it can process that work relative to the other components. According to the theory of constraints, improving throughput at the bottleneck (the constraint) improves the overall processing speed of the software. A counterintuitive implication of the theory is that improving the efficiency of a process stage other than the constraint can add delay and decrease the overall processing capability of the software.
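This constraint-centred view can be made concrete with a toy model. The sketch below (in Python, with made-up stage names and rates) treats a serial pipeline's sustainable throughput as the rate of its slowest stage, which is why tuning a non-bottleneck stage yields no end-to-end gain.

```python
# Minimal sketch of the theory-of-constraints point: in a serial pipeline the
# end-to-end throughput is capped by the slowest stage, so tuning any other
# stage does not improve it. Stage names and rates are illustrative only.

def pipeline_throughput(stage_rates):
    """Items per second the whole pipeline can sustain."""
    return min(stage_rates.values())

stages = {"parse": 500.0, "transform": 120.0, "write_to_disk": 80.0}  # items/sec
print(pipeline_throughput(stages))   # 80.0 -> disk I/O is the bottleneck

stages["transform"] = 240.0          # tune a non-bottleneck stage
print(pipeline_throughput(stages))   # still 80.0 -> no overall gain

stages["write_to_disk"] = 160.0      # tune the constraint itself
print(pipeline_throughput(stages))   # 120.0 -> bottleneck has moved to "transform"
```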
It is impossible to remove bottlenecks entirely, since some component always limits overall performance; the usual goal is therefore to improve the bottleneck component enough that the whole system achieves the desired performance.
The process of tracking down bottlenecks (also referred to as "hot spots", the sections of code that execute most frequently, i.e. have the highest execution count) is called performance analysis. It is carried out with specialized tools such as performance analyzers or profilers; the objective is to make those sections of code perform as efficiently as possible and thereby improve overall algorithmic efficiency.
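As an illustration, hot spots in a Python program can be located with the standard-library profiler. The snippet below uses only cProfile and pstats; the function names in it are invented for the example.

```python
# A minimal sketch of locating a hot spot with Python's built-in profiler.
import cProfile
import io
import pstats

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i          # runs n times per call -> the likely hot spot
    return total

def workload():
    return sum(slow_sum(50_000) for _ in range(20))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())        # the top entries point at the code worth tuning
```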
See also
Performance engineering
Profiling (computer programming)
Program optimization
References
Software optimization
Software performance management | Bottleneck (software) | [
"Technology",
"Engineering"
] | 338 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering"
] |
11,913,227 | https://en.wikipedia.org/wiki/Artificial%20gene%20synthesis | Artificial gene synthesis, or simply gene synthesis, refers to a group of methods that are used in synthetic biology to construct and assemble genes from nucleotides de novo. Unlike DNA synthesis in living cells, artificial gene synthesis does not require template DNA, allowing virtually any DNA sequence to be synthesized in the laboratory. It comprises two main steps, the first of which is solid-phase DNA synthesis, sometimes known as DNA printing. This produces oligonucleotide fragments that are generally under 200 base pairs. The second step then involves connecting these oligonucleotide fragments using various DNA assembly methods. Because artificial gene synthesis does not require template DNA, it is theoretically possible to make a completely synthetic DNA molecule with no limits on the nucleotide sequence or size.
Synthesis of the first complete gene, a yeast tRNA, was demonstrated by Har Gobind Khorana and coworkers in 1972. Synthesis of the first peptide- and protein-coding genes was performed in the laboratories of Herbert Boyer and Alexander Markham, respectively. More recently, artificial gene synthesis methods have been developed that will allow the assembly of entire chromosomes and genomes. The first synthetic yeast chromosome was synthesised in 2014, and entire functional bacterial chromosomes have also been synthesised. In addition, artificial gene synthesis could in the future make use of novel nucleobase pairs (unnatural base pairs).
Standard methods for DNA synthesis
Oligonucleotide synthesis
Oligonucleotides are chemically synthesized using building blocks called nucleoside phosphoramidites. These can be normal or modified nucleosides which have protecting groups to prevent their amines, hydroxyl groups and phosphate groups from interacting incorrectly. One phosphoramidite is added at a time: the 5' hydroxyl group of the growing chain is deprotected, a new protected base is coupled to it, and the cycle is repeated. The chain grows in the 3' to 5' direction, which is backwards relative to biosynthesis. At the end, all the protecting groups are removed. Because this is a chemical process, some incorrect couplings occur, leading to defective products. The longer the oligonucleotide sequence being synthesized, the more defects there are, so this process is only practical for producing short sequences of nucleotides. The current practical limit is about 200 bp (base pairs) for an oligonucleotide of sufficient quality to be used directly in a biological application. HPLC can be used to isolate products with the proper sequence. Meanwhile, a large number of oligos can be synthesized in parallel on gene chips. For optimal performance in subsequent gene synthesis procedures, however, they should be prepared individually and on larger scales.
Annealing based connection of oligonucleotides
Usually, a set of individually designed oligonucleotides is made on automated solid-phase synthesizers, purified and then connected by specific annealing and standard ligation or polymerase reactions. To improve specificity of oligonucleotide annealing, the synthesis step relies on a set of thermostable DNA ligase and polymerase enzymes. To date, several methods for gene synthesis have been described, such as the ligation of phosphorylated overlapping oligonucleotides, the Fok I method and a modified form of ligase chain reaction for gene synthesis. Additionally, several PCR assembly approaches have been described. They usually employ oligonucleotides of 40-50 nucleotides length that overlap each other. These oligonucleotides are designed to cover most of the sequence of both strands, and the full-length molecule is generated progressively by overlap extension (OE) PCR, thermodynamically balanced inside-out (TBIO) PCR or combined approaches. The most commonly synthesized genes range in size from 600 to 1,200 bp although much longer genes have been made by connecting previously assembled fragments of under 1,000 bp. In this size range it is necessary to test several candidate clones confirming the sequence of the cloned synthetic gene by automated sequencing methods.
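As a rough illustration of how a target sequence is decomposed for annealing-based assembly, the Python sketch below tiles a sequence into overlapping oligos on alternating strands, in the spirit of overlap-extension assembly. The oligo length, overlap size and target sequence are arbitrary assumptions; real design software also balances melting temperatures and screens for mispriming.

```python
# Minimal sketch (not a validated design tool): tile a target sequence into
# overlapping oligos on alternating strands, as in overlap-extension assembly.

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def tile_oligos(target, oligo_len=50, overlap=20):
    step = oligo_len - overlap
    oligos = []
    for i, start in enumerate(range(0, len(target) - overlap, step)):
        segment = target[start:start + oligo_len]
        # even-indexed oligos come from the top strand, odd-indexed from the
        # bottom strand (reverse complement), so adjacent oligos can anneal
        oligos.append(segment if i % 2 == 0 else revcomp(segment))
    return oligos

target = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTTGTTGAATTAGAT" * 4
for oligo in tile_oligos(target):
    print(len(oligo), oligo)
```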
Limitations
Because assembly of the full-length gene product relies on the efficient and specific alignment of long single-stranded oligonucleotides, synthesis success is critically affected by extended sequence regions comprising secondary structures caused by inverted repeats, by extraordinarily high or low GC content, and by repetitive structures. Usually such segments of a particular gene can only be synthesized by splitting the procedure into several consecutive steps, with a final assembly of shorter sub-sequences, which in turn leads to a significant increase in the time and labor needed for its production.
The result of a gene synthesis experiment depends strongly on the quality of the oligonucleotides used. For these annealing based gene synthesis protocols, the quality of the product is directly and exponentially dependent on the correctness of the employed oligonucleotides. Alternatively, after performing gene synthesis with oligos of lower quality, more effort must be made in downstream quality assurance during clone analysis, which is usually done by time-consuming standard cloning and sequencing procedures.
Another problem associated with all current gene synthesis methods is the high frequency of sequence errors because of the usage of chemically synthesized oligonucleotides. The error frequency increases with longer oligonucleotides, and as a consequence the percentage of correct product decreases dramatically as more oligonucleotides are used.
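The scale of this problem can be seen with a simple calculation: if each synthesized base is correct with probability (1 - p), an assembly of N bases is entirely error-free with probability (1 - p)**N. The sketch below assumes an illustrative per-base error rate of 1 in 300, which is not a measured figure for any particular chemistry.

```python
# A back-of-the-envelope sketch: if each base is correct with probability
# (1 - p), a molecule of N bases is entirely correct with probability
# (1 - p)**N. The per-base error rate p below is an illustrative assumption.

p = 1 / 300
for n_bases in (60, 200, 600, 1200, 3000):
    fraction_correct = (1 - p) ** n_bases
    print(f"{n_bases:>5} bases -> {fraction_correct:.1%} error-free molecules")
```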
The mutation problem could be reduced by using shorter oligonucleotides to assemble the gene. However, all annealing-based assembly methods require the primers to be mixed together in one tube. In this case, shorter overlaps do not always allow precise and specific annealing of complementary primers, which can inhibit the formation of the full-length product.
Manual design of oligonucleotides is a laborious procedure and does not guarantee the successful synthesis of the desired gene. For optimal performance of almost all annealing based methods, the melting temperatures of the overlapping regions are supposed to be similar for all oligonucleotides. The necessary primer optimisation should be performed using specialized oligonucleotide design programs. Several solutions for automated primer design for gene synthesis have been presented so far.
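A crude version of such a melting-temperature check is sketched below in Python. It uses the simple GC-content approximation valid for oligos longer than roughly 13 nt; real design programs use nearest-neighbour thermodynamics, and the overlap sequences shown are invented for the example.

```python
# Rough sketch of screening overlap regions for similar melting temperatures.
# The Tm formula is the basic GC-content approximation, not a rigorous model.

def approx_tm(seq):
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

overlaps = [
    "ATGGCTAGCAAAGGAGAAGAACT",    # illustrative overlap sequences
    "GGTGTTCAATGCTTTGCGAGATAC",
    "CCATTACCTGTCGACACAATCTGC",
]
tms = [approx_tm(o) for o in overlaps]
print([f"{t:.1f} C" for t in tms])
print("Tm spread:", f"{max(tms) - min(tms):.1f} C")   # a large spread suggests redesign
```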
Error correction procedures
To overcome problems associated with oligonucleotide quality several elaborate strategies have been developed, employing either separately prepared fishing oligonucleotides, mismatch binding enzymes of the mutS family or specific endonucleases from bacteria or phages. Nevertheless, all these strategies increase time and costs for gene synthesis based on the annealing of chemically synthesized oligonucleotides.
Massively parallel sequencing has also been used as a tool to screen complex oligonucleotide libraries and enable the retrieval of accurate molecules. In one approach, oligonucleotides are sequenced on the 454 pyrosequencing platform and a robotic system images and picks individual beads corresponding to accurate sequence. In another approach, a complex oligonucleotide library is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by dial-out PCR.
Increasingly, genes are ordered in sets including functionally related genes or multiple sequence variants on a single gene. Virtually all of the therapeutic proteins in development, such as monoclonal antibodies, are optimised by testing many gene variants for improved function or expression.
Unnatural base pairs
While traditional nucleic acid synthesis uses only the four natural nucleobases (adenine, thymine, guanine and cytosine), oligonucleotide synthesis in the future could incorporate unnatural base pairs, which are artificially designed and synthesized nucleobases that do not occur in nature.
In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides were named d5SICS and dNaM. More technically, these artificial nucleotides bear hydrophobic nucleobases featuring two fused aromatic rings that form a (d5SICS–dNaM) complex, or base pair, in DNA. In 2014 the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA known as a plasmid containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. This was achieved in part by the addition of a supportive algal gene expressing a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. The natural bacterial replication pathways then use them to accurately replicate the plasmid containing d5SICS–dNaM.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. In the future, these unnatural base pairs could be synthesised and incorporated into oligonucleotides via DNA printing methods.
DNA assembly
DNA printing can thus be used to produce DNA parts, which are defined as sequences of DNA that encode a specific biological function (for example, promoters, transcription regulatory sequences or open reading frames). However, because oligonucleotide synthesis typically cannot accurately produce oligonucleotides sequences longer than a few hundred base pairs, DNA assembly methods have to be employed to assemble these parts together to create functional genes, multi-gene circuits or even entire synthetic chromosomes or genomes. Some DNA assembly techniques only define protocols for joining DNA parts, while other techniques also define the rules for the format of DNA parts that are compatible with them. These processes can be scaled up to enable the assembly of entire chromosomes or genomes. In recent years, there has been proliferation in the number of different DNA assembly standards with 14 different assembly standards developed as of 2015, each with their pros and cons. Overall, the development of DNA assembly standards has greatly facilitated the workflow of synthetic biology, aided the exchange of material between research groups and also allowed for the creation of modular and reusable DNA parts.
The various DNA assembly methods can be classified into three main categories – endonuclease-mediated assembly, site-specific recombination, and long-overlap-based assembly. Each group of methods has its distinct characteristics and their own advantages and limitations.
Endonuclease-mediated assembly
Endonucleases are enzymes that recognise and cleave nucleic acid segments and they can be used to direct DNA assembly. Of the different types of restriction enzymes, the type II restriction enzymes are the most commonly available and used because their cleavage sites are located near or in their recognition sites. Hence, endonuclease-mediated assembly methods make use of this property to define DNA parts and assembly protocols.
BioBricks
The BioBricks assembly standard was described and introduced by Tom Knight in 2003 and it has been constantly updated since then. Currently, the most commonly used BioBricks standard is the assembly standard 10, or BBF RFC 10. BioBricks defines the prefix and suffix sequences required for a DNA part to be compatible with the BioBricks assembly method, allowing the joining of all DNA parts which are in the BioBricks format.
The prefix contains the restriction sites for EcoRI, NotI and XbaI, while the suffix contains the SpeI, NotI and PstI restriction sites. Outside of the prefix and suffix regions, the DNA part must not contain these restriction sites. To join two BioBrick parts, one of the plasmids is digested with EcoRI and SpeI while the second plasmid is digested with EcoRI and XbaI. The two EcoRI overhangs are complementary and will thus anneal together, while SpeI and XbaI produce complementary overhangs which can likewise be ligated together. As the resulting plasmid contains the original prefix and suffix sequences, it can be used to join with further BioBricks parts. Because of this property, the BioBricks assembly standard is said to be idempotent in nature. However, a "scar" sequence (either TACTAG or TACTAGAG) is formed between the two fused BioBricks. This prevents BioBricks from being used to create fusion proteins, as the 6bp scar sequence codes for a tyrosine and a stop codon, causing translation to terminate after the first domain is expressed, while the 8bp scar sequence causes a frameshift, preventing continuous readthrough of the codons. To offer alternative scar sequences, for example scars of 6bp or scars that do not contain stop codons, other assembly standards such as the BB-2 Assembly, BglBricks Assembly, Silver Assembly and the Freiburg Assembly were designed.
While the easiest method to assemble BioBrick parts is described above, there also exist several other commonly used assembly methods that offer several advantages over the standard assembly. The 3 antibiotic (3A) assembly allows for the correct assembly to be selected via antibiotic selection, while the amplified insert assembly seeks to overcome the low transformation efficiency seen in 3A assembly.
The BioBrick assembly standard has also served as inspiration for using other types of endonucleases for DNA assembly. For example, both the iBrick standard and the HomeRun vector assembly standards employ homing endonucleases instead of type II restriction enzymes.
Type IIs restriction endonuclease assembly
Some assembly methods also make use of type IIs restriction endonucleases. These differ from other type II endonucleases as they cut several base pairs away from the recognition site. As a result, the overhang sequence can be modified to contain the desired sequence. This provides Type IIs assembly methods with two advantages – it enables "scar-less" assembly, and allows for one-pot, multi-part assembly. Assembly methods that use type IIs endonucleases include Golden Gate and its associated variants.
Golden Gate cloning
The Golden Gate assembly protocol was defined by Engler et al. 2008 to define a DNA assembly method that would give a final construct without a scar sequence, while also lacking the original restriction sites. This allows the protein to be expressed without containing unwanted protein sequences which could negatively affect protein folding or expression. By using the BsaI restriction enzyme that produces a 4 base pair overhang, up to 240 unique, non-palindromic sequences can be used for assembly.
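The "240 unique, non-palindromic sequences" figure follows directly from counting 4-bp overhangs; the short Python check below enumerates all 4**4 = 256 overhangs and removes the 16 that are their own reverse complement.

```python
# Small sketch verifying the count of usable 4-bp Golden Gate overhangs:
# of 4**4 = 256 possibilities, the 16 palindromic ones (equal to their own
# reverse complement) cannot direct assembly unambiguously, leaving 240.
from itertools import product

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

overhangs = ["".join(p) for p in product("ACGT", repeat=4)]
non_palindromic = [o for o in overhangs if o != revcomp(o)]
print(len(overhangs), len(non_palindromic))   # 256 240
```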
Plasmid design and assembly
In Golden Gate cloning, each DNA fragment to be assembled is placed in a plasmid, flanked by inward facing BsaI restriction sites containing the programmed overhang sequences. For each DNA fragment, the 3' overhang sequence is complementary to the 5' overhang of the next downstream DNA fragment. For the first fragment, the 5' overhang is complementary to the 5' overhang of the destination plasmid, while the 3' overhang of the final fragment is complementary to the 3' overhang of the destination plasmid. Such a design allows for all DNA fragments to be assembled in a one-pot reaction (where all reactants are mixed together), with all fragments arranged in the correct sequence. Successfully assembled constructs are selected by detecting the loss of function of a screening cassette that was originally in the destination plasmid.
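The ordering logic of such a one-pot reaction can be illustrated with a toy model: each fragment's 3' overhang must match the 5' overhang of the next fragment, and the chain must close back into the destination plasmid. The fragment names and overhang sequences in the Python sketch below are invented for the example.

```python
# Toy sketch of how programmed 4-bp overhangs dictate fragment order in a
# one-pot Golden Gate reaction. Names and overhangs are illustrative only.

fragments = {                       # name: (5' overhang, 3' overhang)
    "promoter":   ("AATG", "GGTA"),
    "cds":        ("GGTA", "TTCG"),
    "terminator": ("TTCG", "CGCT"),
}

def assemble(parts, vector_5p="AATG", vector_3p="CGCT"):
    order, current, remaining = [], vector_5p, dict(parts)
    while remaining:
        name = next(n for n, (o5, _) in remaining.items() if o5 == current)
        order.append(name)
        current = remaining.pop(name)[1]
    assert current == vector_3p, "fragments do not close back into the vector"
    return order

print(assemble(fragments))          # ['promoter', 'cds', 'terminator']
```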
MoClo and Golden Braid
The original Golden Gate Assembly only allows a single construct to be made in the destination vector. To enable this construct to be used in a subsequent reaction as an entry vector, the MoClo and Golden Braid standards were designed.
The MoClo standard involves defining multiple tiers of DNA assembly:
Tier 1: Tier 1 assembly is the standard Golden Gate assembly, and genes are assembled from their components parts (DNA parts coding for genetic elements like UTRs, promoters, ribosome binding sites or terminator sequences). Flanking the insertion site of the tier 1 destination vectors are a pair of inward cutting BpiI restriction sites. This allows these plasmids to be used as entry vectors for tier two destination vectors.
Tier 2: Tier 2 assembly involves further assembling the genes assembled in tier 1 assembly into multi-gene constructs. If there is a need for further, higher tier assembly, inward cutting BsaI restriction sites can be added to flank the insertion sites. These vectors can then be used as entry vectors for higher tier constructs.
Each assembly tier alternates the use of BsaI and BpiI restriction sites to minimise the number of forbidden sites, and sequential assembly at each tier is achieved by following the Golden Gate plasmid design. Overall, the MoClo standard allows the assembly of a construct containing multiple transcription units, all assembled from different DNA parts, by a series of one-pot Golden Gate reactions. One drawback of the MoClo standard, however, is that it requires the use of 'dummy parts' with no biological function if the final construct requires fewer than four component parts. The Golden Braid standard, on the other hand, introduced a pairwise Golden Gate assembly standard.
The Golden Braid standard uses the same tiered assembly as MoClo, but each tier only involves the assembly of two DNA fragments, i.e. a pairwise approach. Hence in each tier, pairs of genes are cloned into a destination fragment in the desired sequence, and these are subsequently assembled two at a time in successive tiers. Like MoClo, the Golden Braid standard alternates the BsaI and BpiI restriction enzymes between each tier.
The development of the Golden Gate assembly method and its variants has allowed researchers to design toolkits to speed up the synthetic biology workflow. For example, EcoFlex was developed as a toolkit for E. coli that uses the MoClo standard for its DNA parts, while a similar toolkit has been developed for engineering the microalga Chlamydomonas reinhardtii.
Site-specific recombination
Site-specific recombination makes use of phage integrases instead of restriction enzymes, eliminating the need for having restriction sites in the DNA fragments. Instead, integrases make use of unique attachment (att) sites, and catalyse DNA rearrangement between the target fragment and the destination vector. The Invitrogen Gateway cloning system was invented in the late 1990s and uses two proprietary enzyme mixtures, BP clonase and LR clonase. The BP clonase mix catalyses the recombination between attB and attP sites, generating hybrid attL and attR sites, while the LR clonase mix catalyse the recombination of attL and attR sites to give attB and attP sites. As each enzyme mix recognises only specific att sites, recombination is highly specific and the fragments can be assembled in the desired sequence.
Vector design and assembly
Because Gateway cloning is a proprietary technology, all Gateway reactions must be carried out with the Gateway kit that is provided by the manufacturer. The reaction can be summarised into two steps. The first step involves assembling the entry clones containing the DNA fragment of interest, while the second step involves inserting this fragment of interest into the destination clone.
Entry clones must be made using the supplied "Donor" vectors containing a Gateway cassette flanked by attP sites. The Gateway cassette contains a bacterial suicide gene (e.g. ccdB) that will allow for survival and selection of successfully recombined entry clones. A pair of attB sites are added to flank the DNA fragment of interest, and this will allow recombination with the attP sites when the BP clonase mix is added. Entry clones are produced, and the fragment of interest is flanked by attL sites.
The destination vector also comes with a Gateway cassette, but is instead flanked by a pair of attR sites. Mixing this destination plasmid with the entry clones and the LR clonase mix will allow for recombination to occur between the attR and attL sites. A destination clone is produced, with the fragment of interest successfully inserted. The lethal gene is inserted into the original vector, and bacteria transformed with this plasmid will die. The desired vector can thus be easily selected.
The earliest iterations of the Gateway cloning method allowed only one entry clone to be used for each destination clone produced. However, further research revealed that four more orthogonal att sequences could be generated, allowing the assembly of up to four different DNA fragments; this process is now known as the Multisite Gateway technology.
Besides Gateway cloning, non-commercial methods using other integrases have also been developed. For example, the Serine Integrase Recombinational Assembly (SIRA) method uses the ϕC31 integrase, while the Site-Specific Recombination-based Tandem Assembly (SSRTA) method uses the Streptomyces phage φBT1 integrase. Other methods, like the HomeRun Vector Assembly System (HVAS), build on the Gateway cloning system and further incorporate homing endonucleases to design a protocol that could potentially support the industrial synthesis of synthetic DNA constructs.
Long-overlap-based assembly
There have been a variety of long-overlap-based assembly methods developed in recent years. One of the most commonly used methods, the Gibson assembly method, was developed in 2009, and provides a one-pot DNA assembly method that does not require the use of restriction enzymes or integrases. Other similar overlap-based assembly methods include Circular Polymerase Extension Cloning (CPEC), Sequence and Ligase Independent Cloning (SLIC) and Seamless Ligation Cloning Extract (SLiCE). Despite the presence of many overlap assembly methods, the Gibson assembly method is still the most popular. Besides the methods listed above, other researchers have built on the concepts used in Gibson assembly and other assembly methods to develop new assembly strategies like the Modular Overlap-Directed Assembly with Linkers (MODAL) strategy, or the Biopart Assembly Standard for Idempotent Cloning (BASIC) method.
Gibson assembly
The Gibson assembly method is a relatively straightforward DNA assembly method, requiring only a few additional reagents: the 5' T5 exonuclease, Phusion DNA polymerase, and Taq DNA ligase. The DNA fragments to be assembled are synthesised to have overlapping 5' and 3' ends in the order that they are to be assembled in. These reagents are mixed together with the DNA fragments to be assembled at 50 °C and the following reactions occur:
The T5 exonuclease chews back DNA from the 5' end of each fragment, exposing 3' overhangs on each DNA fragment.
The complementary overhangs on adjacent DNA fragments anneal via complementary base pairing.
The Phusion DNA polymerase fills in any gaps where the fragments anneal.
Taq DNA ligase repairs the nicks on both DNA strands.
Because the T5 exonuclease is heat labile, it is inactivated at 50 °C after the initial chew back step. The product is thus stable, and the fragments assembled in the desired order. This one-pot protocol can assemble up to 5 different fragments accurately, while several commercial providers have kits to accurately assemble up to 15 different fragments in a two-step reaction. However, while the Gibson assembly protocol is fast and uses relatively few reagents, it requires bespoke DNA synthesis as each fragment has to be designed to contain overlapping sequences with the adjacent fragments and amplified via PCR. This reliance on PCR may also affect the fidelity of the reaction when long fragments, fragments with high GC content or repeat sequences are used.
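The net effect of these steps, joining fragments at designed terminal overlaps, can be mimicked in a few lines of Python. The sketch below simply merges two strings at their longest shared end/start overlap; the sequences are arbitrary, and real overlaps are typically a few tens of base pairs with matched melting temperatures.

```python
# Simplified sketch of the outcome of a Gibson-style join: two fragments
# designed with a shared terminal overlap are merged into one sequence.

def join_by_overlap(left, right, min_overlap=10):
    """Merge `left` and `right` at the longest shared end/start overlap."""
    for size in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-size:] == right[:size]:
            return left + right[size:]
    raise ValueError("no sufficient overlap between fragments")

frag_a = "ATGGCTAGCAAAGGAGAAGAACTTTTCACTGGAGTT"
frag_b = "CTTTTCACTGGAGTTGTCCCAATTCTTGTTGAATTAGAT"   # starts with frag_a's last 15 bases
print(join_by_overlap(frag_a, frag_b))
```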
MODAL
The MODAL strategy defines overlap sequences known as "linkers" to reduce the amount of customisation that needs to be done with each DNA fragment. The linkers were designed using the R2oDNA Designer software and the overlap regions were designed to be 45 bp long to be compatible with Gibson assembly and other overlap assembly methods. To attach these linkers to the parts to be assembled, PCR is carried using part-specific primers containing 15 bp prefix and suffix adaptor sequences. The linkers are then attached to the adaptor sequences via a second PCR reaction. To position the DNA fragments, the same linker will be attached to the suffix of the desired upstream fragment and the prefix of the desired downstream fragments. Once the linkers are attached, Gibson assembly, CPEC, or the other overlap assembly methods can all be used to assemble the DNA fragments in the desired order.
BASIC
The BASIC assembly strategy was developed in 2015 and sought to address the limitations of previous assembly techniques, incorporating six key concepts from them: standard reusable parts; single-tier format (all parts are in the same format and are assembled using the same process); idempotent cloning; parallel (multipart) DNA assembly; size independence; automatability.
DNA parts and linker design
The DNA parts are designed and cloned into storage plasmids, with the part flanked by an integrated prefix (iP) and an integrated suffix (iS) sequence. The iP and iS sequences contain inward-facing BsaI restriction sites, which produce overhangs complementary to the BASIC linkers. As in MODAL, the 7 standard linkers used in BASIC were designed with the R2oDNA Designer software and screened to ensure that they contain no sequences with homology to chassis genomes and no unwanted sequences such as secondary-structure sequences, restriction sites or ribosomal binding sites. Each linker sequence is split into two halves, each with a 4 bp overhang complementary to the BsaI restriction site and a 12 bp double-stranded sequence, and sharing a 21 bp overlap sequence with the other half. The half that binds to the upstream DNA part is known as the suffix linker part (e.g. L1S) and the half that binds to the downstream part is known as the prefix linker part (e.g. L1P). These linkers form the basis of assembling the DNA parts together.
Besides directing the order of assembly, the standard BASIC linkers can also be modified to carry out other functions. To allow for idempotent assembly, linkers were also designed with additional methylated iP and iS sequences inserted to protect them from being recognised by BsaI. This methylation is lost following transformation and in vivo plasmid replication, and the plasmids can be extracted, purified, and used for further reactions.
Because the linker sequences are relatively long (45 bp for a standard linker), there is an opportunity to incorporate functional DNA sequences and so reduce the number of DNA parts needed during assembly. The BASIC assembly standard provides several linkers embedded with ribosome binding sites (RBSs) of different strengths. Similarly, to facilitate the construction of fusion proteins containing multiple protein domains, several fusion linkers were designed to allow full read-through of the DNA construct. These fusion linkers code for a 15 amino acid glycine and serine polypeptide, which is an ideal linker peptide for fusion proteins with multiple domains.
Assembly
There are three main steps in the assembly of the final construct.
First, the DNA parts are excised from the storage plasmid, giving a DNA fragment with BsaI overhangs on the 3' and 5' end.
Next, each linker part is attached to its respective DNA part by incubating with T4 DNA ligase. Each DNA part will have a suffix and prefix linker part from two different linkers to direct the order of assembly. For example, the first part in the sequence will have L1P and L2S, while the second part will have L2P and L3S attached. The linker parts can be changed to change the sequence of assembly.
Finally, the parts with the attached linkers are assembled into a plasmid by incubating at 50 °C. The 21 bp overhangs of the P and S linkers anneal and the final construct can be transformed into bacteria cells for cloning. The single stranded nicks are repaired in vivo following transformation, producing a stable final construct cloned into plasmids.
Applications
As DNA printing and DNA assembly methods have allowed commercial gene synthesis to become progressively and exponentially cheaper over the past years, artificial gene synthesis represents a powerful and flexible engineering tool for creating and designing new DNA sequences and protein functions. Besides synthetic biology, various research areas like those involving heterologous gene expression, vaccine development, gene therapy and molecular engineering, would benefit greatly from having fast and cheap methods to synthesise DNA to code for proteins and peptides. The methods used for DNA printing and assembly have even enabled the use of DNA as an information storage medium.
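The information-storage application rests on the fact that each base can carry two bits. A toy Python encoding is sketched below (00 -> A, 01 -> C, 10 -> G, 11 -> T); real DNA storage schemes additionally add error correction and avoid homopolymer runs and extreme GC content, none of which this sketch attempts.

```python
# Toy sketch of DNA as a storage medium: two bits per base, and back again.
# Real schemes add error correction and sequence constraints.

BASES = "ACGT"   # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def bytes_to_dna(data: bytes) -> str:
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"hi")
print(encoded)                 # 8 bases encode 2 bytes
print(dna_to_bytes(encoded))   # b'hi'
```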
Synthesising bacterial genomes
Synthia and Mycoplasma laboratorium
On June 28, 2007, a team at the J. Craig Venter Institute published an article in Science Express, saying that they had successfully transplanted the natural DNA from a Mycoplasma mycoides bacterium into a Mycoplasma capricolum cell, creating a bacterium which behaved like a M. mycoides.
On October 6, 2007, Craig Venter announced in an interview with the UK's The Guardian newspaper that the same team had artificially synthesized a modified version of the single chromosome of Mycoplasma genitalium. The chromosome was modified to eliminate all genes which tests in live bacteria had shown to be unnecessary. The next planned step in this minimal genome project is to transplant the synthesized minimal genome into a bacterial cell with its old DNA removed; the resulting bacterium will be called Mycoplasma laboratorium. The next day the Canadian bioethics group ETC Group issued a statement through its representative, Pat Mooney, saying Venter's "creation" was "a chassis on which you could build almost anything". The synthesized genome had not yet been transplanted into a working cell.
On May 21, 2010, Science reported that the Venter group had successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record, and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had its DNA removed. The "synthetic" bacterium was viable, i.e. capable of replicating billions of times. The team had originally planned to use the M. genitalium bacterium they had previously been working with, but switched to M. mycoides because the latter bacterium grows much faster, which translated into quicker experiments. Venter describes it as "the first species.... to have its parents be a computer". The transformed bacterium is dubbed "Synthia" by ETC. A Venter spokesperson has declined to confirm any breakthrough at the time of this writing.
Synthetic Yeast 2.0
As part of the Synthetic Yeast 2.0 project, various research groups around the world have participated in a project to synthesise synthetic yeast genomes, and through this process, optimise the genome of the model organism Saccharomyces cerevisiae. The Yeast 2.0 project applied various DNA assembly methods that have been discussed above, and in March 2014, Jef Boeke of the Langone Medical Centre at New York University, revealed that his team had synthesized chromosome III of S. cerevisiae. The procedure involved replacing the genes in the original chromosome with synthetic versions and the finished synthetic chromosome was then integrated into a yeast cell. It required designing and creating 273,871 base pairs of DNA – fewer than the 316,667 pairs in the original chromosome. In March 2017, the synthesis of 6 of the 16 chromosomes had been completed, with synthesis of the others still ongoing.
See also
DNA sequencing
Genetic modification
Protein engineering
Synthetic Gene Database
Notes
Chemical synthesis
Gene expression
Genetically modified organisms
Genetics techniques
Molecular genetics
Protein biosynthesis
Synthetic biology | Artificial gene synthesis | [
"Chemistry",
"Engineering",
"Biology"
] | 6,694 | [
"Genetics techniques",
"Synthetic biology",
"Protein biosynthesis",
"Biological engineering",
"Biochemistry",
"Genetically modified organisms",
"Gene expression",
"Genetic engineering",
"Bioinformatics",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"nan",
"Molecular biology"... |
11,913,417 | https://en.wikipedia.org/wiki/Representative%20elementary%20volume | In the theory of composite materials, the representative elementary volume (REV) (also called the representative volume element (RVE) or the unit cell) is the smallest volume over which a measurement can be made that will yield a value representative of the whole. In the case of periodic materials, one simply chooses a periodic unit cell (which, however, may be non-unique), but in random media, the situation is much more complicated. For volumes smaller than the RVE, a representative property cannot be defined and the continuum description of the material involves Statistical Volume Element (SVE) and random fields. The property of interest can include mechanical properties such as elastic moduli, hydrogeological properties, electromagnetic properties, thermal properties, and other averaged quantities that are used to describe physical systems.
Definition
Rodney Hill defined the RVE as a sample of a heterogeneous material that:
"is entirely typical of the whole mixture on average”, and
"contains a sufficient number of inclusions for the apparent properties to be independent of the surface values of traction and displacement, so long as these values are macroscopically uniform.”
In essence, statement (1) is about the material's statistics (i.e. spatially homogeneous and ergodic), while statement (2) is a pronouncement on the independence of effective constitutive response with respect to the applied boundary conditions.
Both of these are issues of the mesoscale (L), the size of the domain of random microstructure over which smoothing (or homogenization) is carried out, relative to the microscale (d). As L/d goes to infinity, the RVE is obtained, while any finite mesoscale involves statistical scatter and therefore describes an SVE. With these considerations one obtains bounds on the effective (macroscopic) response of elastic (non)linear and inelastic random microstructures. In general, the stronger the mismatch in material properties, or the stronger the departure from elastic behavior, the larger the RVE. The finite-size scaling of elastic material properties from SVE to RVE can be grasped in compact form with the help of scaling functions universally based on stretched exponentials. Considering that the SVE may be placed anywhere in the material domain, one arrives at a technique for the characterization of continuum random fields.
Another definition of the RVE was proposed by Drugan and Willis:
"It is the smallest material volume element of the composite for which the usual spatially constant (overall modulus) macroscopic constitutive representation is a sufficiently accurate model to represent mean constitutive response."
The choice of RVE can be quite a complicated process. The existence of a RVE assumes that it is possible to replace a heterogeneous material with an equivalent homogeneous material. This assumption implies that the volume should be large enough to represent the microstructure without introducing non-existing macroscopic properties (such as anisotropy in a macroscopically isotropic material). On the other hand, the sample should be small enough to be analyzed analytically or numerically.
Examples
RVEs for mechanical properties
In continuum mechanics generally for a heterogeneous material, RVE can be considered as a volume V that represents a composite statistically, i.e., volume that effectively includes a sampling of all microstructural heterogeneities (grains, inclusions, voids, fibers, etc.) that occur in the composite. It must however remain small enough to be considered as a volume element of continuum mechanics. Several types of boundary conditions can be prescribed on V to impose a given mean strain or mean stress to the material element.
One of the tools available to calculate the elastic properties of an RVE is the use of the open-source EasyPBC ABAQUS plugin tool.
Analytical or numerical micromechanical analysis of fiber-reinforced composites involves the study of a representative volume element (RVE). Although fibers are distributed randomly in real composites, many micromechanical models assume a periodic arrangement of fibers, from which an RVE can be isolated in a straightforward manner. The RVE has the same elastic constants and fiber volume fraction as the composite. In general, the RVE can be regarded as a differential element containing a large number of crystals.
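For such idealised periodic models, the link between unit-cell geometry and fiber volume fraction is elementary. The sketch below assumes a square packing of continuous circular fibers, for which Vf = pi * r**2 / s**2 with fiber radius r and cell side s; the numbers are illustrative.

```python
# Fiber volume fraction of an idealised square-packed unit cell:
# Vf = pi * r**2 / s**2 for fiber radius r and cell side s.
import math

def square_packing_vf(fiber_radius, cell_side):
    return math.pi * fiber_radius**2 / cell_side**2

print(f"{square_packing_vf(3.5, 10.0):.2%}")   # ~38% fiber volume fraction
print(f"{square_packing_vf(5.0, 10.0):.2%}")   # ~79%, the square-packing maximum (r = s/2)
```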
RVEs for porous media
Establishing a given porous medium's properties requires measuring samples of the porous medium. If the sample is too small, the readings tend to oscillate. With increasing sample size, the oscillations begin to dampen out. Eventually the sample size will become large enough that readings are consistent. This sample size is referred to as the representative elementary volume.
If sample size is increased further, measurement will remain stable until the sample size gets large enough that it begins to include other hydrostratigraphic layers. This is referred to as the maximum elementary volume (MEV).
The groundwater flow equation has to be defined in an REV.
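The stabilization of a measured property with growing sample size can be illustrated numerically. In the Python sketch below, a synthetic, uncorrelated binary "pore/solid" field stands in for a porous medium (a deliberate simplification, not a model of any real rock), and the porosity is measured over cubic windows of increasing edge length; small windows give oscillating readings, while larger windows settle toward the bulk value, which is the behaviour used to identify the REV.

```python
import numpy as np

# Sketch: porosity of growing subvolumes in a synthetic random medium.
# The microstructure is purely illustrative: uncorrelated voxels that are
# pores with probability 0.30.

rng = np.random.default_rng(0)
medium = rng.random((200, 200, 200)) < 0.30   # True = pore voxel

center = np.array(medium.shape) // 2
for half in range(2, 100, 10):
    lo, hi = center - half, center + half
    window = medium[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    porosity = window.mean()                   # pore fraction in the window
    print(f"window edge = {2 * half:3d} voxels   porosity = {porosity:.3f}")
```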
RVEs for electromagnetic media
While RVEs for electromagnetic media can have the same form as those for elastic or porous media, the fact that mechanical strength and stability are not concerns allows for a wide range of RVEs. A typical example is an RVE consisting of a split-ring resonator and its surrounding backing material.
Alternatives for RVE
There is no single RVE size, and depending on the mechanical properties studied, the RVE size can vary significantly. The concepts of the statistical volume element (SVE) and the uncorrelated volume element (UVE) have been introduced as alternatives to the RVE.
Statistical Volume Element (SVE)
The statistical volume element (SVE), which is also referred to as a stochastic volume element in finite element analysis, takes into account the variability in the microstructure. Unlike the RVE, in which an average value is assumed for all realizations, the SVE can have a different value from one realization to another. SVE models have been developed to study polycrystalline microstructures. Grain features, including orientation, misorientation, grain size, grain shape, and grain aspect ratio, are considered in the SVE model. The SVE model has been applied to material characterization and damage prediction at the microscale. Compared with the RVE, the SVE can provide a comprehensive representation of the microstructure of materials.
Uncorrelated Volume Element (UVE)
The uncorrelated volume element (UVE) is an extension of the SVE that also considers the covariance of adjacent microstructure, in order to provide an accurate length scale for stochastic modelling.
References
Bibliography
.
Volume
Hydrogeology
Continuum mechanics | Representative elementary volume | [
"Physics",
"Mathematics",
"Environmental_science"
] | 1,362 | [
"Scalar physical quantities",
"Hydrology",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Classical mechanics",
"Size",
"Extensive quantities",
"Volume",
"Wikipedia categories named after physical quantities",
"Hydrogeology"
] |
11,913,475 | https://en.wikipedia.org/wiki/Rob%20Blokzijl | Robert "Rob" Blokzijl (21 October 1943 – 1 December 2015) was a Dutch physicist and computer scientist at the National Institute for Subatomic Physics (NIKHEF), and an early internet pioneer. He was a founding member and chairman of RIPE, the Réseaux IP Européens (French for "European IP Networks"), the European Internet registry organisation.
Life and work
Born in Amsterdam, Blokzijl graduated from the University of Amsterdam in 1970, and received a doctorate in experimental physics from the same university in 1977.
Blokzijl had been active in building networks for the particle physics community in Europe. He was a founding member and chairman of NIKHEF, the National Institute for Nuclear and High Energy Physics in the Netherlands. At the Réseaux IP Européens (RIPE), the European open forum for IP networking, he was spokesperson at its foundation in 1989 and later chaired this forum. He was also instrumental in the creation of the Réseaux IP Européens Network Coordination Centre (RIPE NCC) in 1992 as the first regional Internet registry (RIR) in the world. In 1999 he was also selected for the ICANN Board by the Address Supporting Organization, where he served until December 2002. In 2013 Blokzijl announced his resignation as chairman of RIPE, effective as of RIPE 68, after holding the position for 25 years. He appointed Hans Petter Holen as his successor.
In 2010 Blokzijl was awarded Officer in the Order of Orange-Nassau. He received this Royal Honour from Lodewijk Asscher, the Acting Mayor of Amsterdam. At the 93rd IETF meeting in 2015, Blokzijl was awarded the ISOC Jonathan B. Postel Service Award.
He died on 1 December 2015, aged 72.
On 1 December 2016, the RIPE NCC established the Rob Blokzijl Foundation to honour Rob's legacy by recognising and rewarding individuals who make substantial contributions to the development of the internet in the RIPE NCC service region.
References
External links
ICANNWiki entry on Robert Blokzijl
VIDEO of Postel Award announcement 23 July 2015
1943 births
2015 deaths
20th-century Dutch physicists
Dutch computer scientists
Particle physicists
University of Amsterdam alumni
Officers of the Order of Orange-Nassau
Scientists from Amsterdam
21st-century Dutch physicists | Rob Blokzijl | [
"Physics"
] | 479 | [
"Particle physicists",
"Particle physics"
] |
11,914,292 | https://en.wikipedia.org/wiki/Prix%20Michel-Sarrazin | The Prix Michel-Sarrazin is awarded annually in the Canadian province of Quebec by the Club de Recherches Clinique du Québec to a celebrated Québécois scientist who, by their dynamism and productivity, has contributed in an important way to the advancement of biomedical research. It is named in honour of Michel Sarrazin (1659–1734), who was the first Canadian scientist.
Winners
Source: CRCQ
1977 – Michel Chrétien
1978 – Jean-Marie Delage
1979 – Guy Lemieux
1980 – Charles Philippe Leblond
1981 – René Simard
1982 – Louis Poirier
1983 – André Barbeau
1984 – Jacques R. Ducharme
1985 – André Lanthier
1986 – Claude Fortier
1987 – Domenico Regoli
1988 – Charles Scriver
1989 – Serge Carrière
1990 – Fernand Labrie
1991 – Étienne LeBel
1992 – Réginald Nadeau
1993 – Claude C. Roy
1994 – Jacques Leblanc
1995 – Clarke Fraser
1996 – Jacques Genest
1997 – Samuel Solomon
1998 – Jacques de Champlain
1999 – Claude Laberge
2000 – Martial G. Bourassa
2001 – Jean Davignon
2002 – Brenda Milner
2003 – Peter T. Macklem
2004 – Francis Glorieux
2005 – Pavel Hamet
2006 – Marek Rola-Pleszczynski
2007 – Rémi Quirion
2008 – Serge Rossignol
2009 – Jacques P. Tremblay
2010 – Michel Bouvier
2011 – Stanley Nattel
2012 – Michel L. Tremblay
2013 – Vassilios Papadopoulos
2014 – Roger Lecomte
2015 – Claude Perreault
2016 – Michel G. Bergeron
2017 – Anne-Marie Mes-Masson
2018 – William D. Fraser
See also
List of biochemistry awards
List of biomedical science awards
List of awards named after people
References
Prix Michel-Sarrazin
Canadian science and technology awards
Awards established in 1977
Biochemistry awards
Biomedical awards | Prix Michel-Sarrazin | [
"Chemistry",
"Biology"
] | 385 | [
"Biochemistry",
"Biochemistry awards"
] |
11,914,839 | https://en.wikipedia.org/wiki/Liquid-mirror%20space%20telescope | A liquid-mirror space telescope is a concept for a reflecting space telescope that uses a reflecting liquid such as mercury as its primary reflector.
Design
There are several designs for such a telescope:
Twirled pail: A pair of objects, one the mirror assembly and the other a counterweight possibly containing a camera assembly, are spun up to induce centripetal acceleration on the surface of the mirror assembly.
Half toroid: A hollow torus is spun up to maintain centripetal acceleration against the inside wall. The camera assembly sits in the center. The torus width is arbitrarily large. Optional other pieces include a large flat mirror in the center to allow randomly orienting the mirror without frequently changing the axis of spin.
Balloon: A balloon with a reflective liquid on the inside is spun up and deforms itself into a parabolic shape. A flat mirror on the inside reflects light to the concave surface.
Continuous-acceleration rocket: A spacecraft is accelerated by an ion thruster or something similar, which produces a constant linear acceleration for a long period of time. The spacecraft carries a rotating liquid mirror, the axis of which is parallel with the direction of acceleration. The spacecraft can be accelerated in any direction, so the mirror can be aimed in any direction.
Regardless of the specific configuration, such a telescope would be similar to an Earth-based liquid-mirror telescope. However, instead of relying on Earth's gravity to maintain the necessary parabolic shape of the rotating mercury mirror, it relies on artificial gravity instead.
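Whatever supplies the artificial gravity, the mirror figure follows from elementary mechanics: a liquid rotating at angular velocity omega under a uniform effective acceleration a settles into the paraboloid z(r) = omega^2 r^2 / (2a), with focal length f = a / (2 omega^2), the same relation used for ground-based liquid mirrors with a = g. The short Python sketch below evaluates this relation; the acceleration and spin rate are hypothetical illustrative values.

```python
import math

def focal_length(acceleration, omega):
    """Focal length of a rotating liquid mirror: f = a / (2 * omega**2)."""
    return acceleration / (2.0 * omega ** 2)

def surface_height(r, acceleration, omega):
    """Paraboloid surface z(r) = omega**2 * r**2 / (2 * a)."""
    return omega ** 2 * r ** 2 / (2.0 * acceleration)

if __name__ == "__main__":
    a = 0.1                               # hypothetical artificial acceleration, m/s^2
    rpm = 1.0                             # hypothetical spin rate, revolutions per minute
    omega = 2.0 * math.pi * rpm / 60.0    # angular velocity, rad/s
    print(f"focal length ~ {focal_length(a, omega):.1f} m")
    print(f"surface sag at r = 2 m ~ {surface_height(2.0, a, omega) * 1000:.0f} mm")
```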
Other possibilities for inducing a parabolic shape in the reflecting liquid include:
magnetic fields on a viscous and partially magnetic liquid;
internal pressures or surface tension effects on a reflective liquid;
creating the telescope while the reflective surface is liquid, but depending on cooling effects to solidify the surface and then using that as the telescope main mirror.
The concept is seen as an enabler of very large optical space telescopes, as a liquid mirror would be much cheaper to construct than a conventional glass mirror of comparable performance.
History
In April 2022, NASA reported that it would conduct the Fluidic Telescope Experiment (FLUTE) on the ISS as part of Axiom Mission 1 astronaut Eytan Stibbe's research portfolio. The experiment would test a liquid lens made of water injected with polymers in microgravity, using buoyancy to counteract gravitational forces and produce effective weightlessness, with the liquid to be hardened later in orbit by UV light or temperature.
References
External links
Sci-Astro discussion thread
Technical Paper describing the propulsive acceleration method
Telescope types
Space telescopes | Liquid-mirror space telescope | [
"Astronomy"
] | 523 | [
"Space telescopes"
] |
11,915,031 | https://en.wikipedia.org/wiki/Plantago%20maritima | Plantago maritima, the sea plantain, seaside plantain or goose tongue, is a species of flowering plant in the plantain family Plantaginaceae. It has a subcosmopolitan distribution in temperate and Arctic regions, native to most of Europe, northwest Africa, northern and central Asia, northern North America, and southern South America.
Description
It is a herbaceous perennial plant with a dense rosette of leaves without petioles. Each leaf is linear, 2–22 cm long and under 1 cm broad, thick and fleshy-textured, with an acute apex and a smooth or distantly toothed margin; there are three to five veins. The flowers are small, greenish-brown with brown stamens, produced in a dense spike 0.5–10 cm long on top of a stem 3–20 cm tall.
Subspecies
There are four subspecies:
Plantago maritima subsp. maritima. Europe, Asia, northwest Africa.
Plantago maritima subsp. borealis (Lange) A. Blytt and O. Dahl. Arctic regions. All parts of the plant small, compared to temperate plants.
Plantago maritima subsp. juncoides (Lam.) Hultén. South America, North America (this name to North American plants has been questioned).
Plantago maritima subsp. serpentina (All.) Arcang. Central Europe, on serpentine soils in mountains.
Ecology and physiology
In much of its range it is strictly coastal, growing on sandy soils. In some areas, it also occurs in alpine habitats, along mountain streams. Some of the physiology and metabolism of this species has been described; of particular note is how its metabolism is altered under elevated atmospheric carbon dioxide concentrations.
Uses
Like samphires, the leaves of the plant are harvested to be eaten raw or cooked. The seeds are also eaten raw or cooked, and can be ground into flour.
References
External links
maritima
Edible plants
Flora of Europe
Flora of North Africa
Flora of Northern America
Flora of southern South America
Flora of Western Asia
Halophytes
Plants described in 1753
Taxa named by Carl Linnaeus | Plantago maritima | [
"Chemistry"
] | 439 | [
"Halophytes",
"Salts"
] |
11,915,675 | https://en.wikipedia.org/wiki/Earth%20mover%27s%20distance | In computer science, the earth mover's distance (EMD) is a measure of dissimilarity between two frequency distributions, densities, or measures, over a metric space D.
Informally, if the distributions are interpreted as two different ways of piling up earth (dirt) over D, the EMD captures the minimum cost of building the smaller pile using dirt taken from the larger, where cost is defined as the amount of dirt moved multiplied by the distance over which it is moved.
Over probability distributions, the earth mover's distance is also known as the first-order Wasserstein metric $W_1$, the Kantorovich–Rubinstein metric, or Mallows's distance. It is the solution of the optimal transport problem, which in turn is also known as the Monge–Kantorovich problem, or sometimes the Hitchcock–Koopmans transportation problem; when the measures are uniform over a set of discrete elements, the same optimization problem is known as minimum weight bipartite matching.
Formal definitions
The EMD between probability distributions $P$ and $Q$ on a metric space $(D, d)$ can be defined as an infimum over joint probabilities:
$$\mathrm{EMD}(P, Q) = \inf_{\gamma \in \Gamma(P, Q)} \int_{D \times D} d(x, y)\, \mathrm{d}\gamma(x, y),$$
where $\Gamma(P, Q)$ is the set of all joint distributions whose marginals are $P$ and $Q$.
By Kantorovich–Rubinstein duality, this can also be expressed as:
$$\mathrm{EMD}(P, Q) = \sup_{\|f\|_{L} \le 1} \left( \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{y \sim Q}[f(y)] \right),$$
where the supremum is taken over all 1-Lipschitz continuous functions, i.e. those satisfying $|f(x) - f(y)| \le d(x, y)$ for all $x, y \in D$.
EMD between signatures
In some applications, it is convenient to represent a distribution as a signature, or a collection of clusters, where the $i$-th cluster represents a feature of mass $w_i$ centered at $x_i$.
In this formulation, consider signatures $P = \{(p_1, w_{p_1}), \ldots, (p_m, w_{p_m})\}$ and $Q = \{(q_1, w_{q_1}), \ldots, (q_n, w_{q_n})\}$. Let $d_{ij}$ be the ground distance between clusters $p_i$ and $q_j$. Then the EMD between $P$ and $Q$ is given by the optimal flow $F = [f_{ij}]$, with $f_{ij}$ the flow between $p_i$ and $q_j$, that minimizes the overall cost
$$\min_{F} \sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}\, d_{ij},$$
subject to the constraints:
$$f_{ij} \ge 0, \qquad \sum_{j=1}^{n} f_{ij} \le w_{p_i}, \qquad \sum_{i=1}^{m} f_{ij} \le w_{q_j}, \qquad \sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} = \min\left( \sum_{i=1}^{m} w_{p_i}, \sum_{j=1}^{n} w_{q_j} \right).$$
The optimal flow is found by solving this linear optimization problem. The earth mover's distance is defined as the work normalized by the total flow:
$$\mathrm{EMD}(P, Q) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}\, d_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}}.$$
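Because the problem above is an ordinary linear program, small instances can be solved directly with a general-purpose LP solver. The Python sketch below uses SciPy's linprog; the function name and the toy signatures are illustrative only, and production EMD libraries use specialized solvers instead.

```python
import numpy as np
from scipy.optimize import linprog

def emd_signatures(p_weights, q_weights, dist):
    """EMD between two signatures given a ground-distance matrix.

    p_weights: (m,) cluster masses of the first signature
    q_weights: (n,) cluster masses of the second signature
    dist:      (m, n) ground distances d_ij between clusters
    """
    m, n = dist.shape
    c = dist.ravel()   # objective: sum_ij f_ij * d_ij (row-major flattening of the flow)

    # Row-sum constraints: sum_j f_ij <= w_p[i]
    A_ub = np.zeros((m + n, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    # Column-sum constraints: sum_i f_ij <= w_q[j]
    for j in range(n):
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([p_weights, q_weights])

    # Total flow equals the smaller of the two total masses
    A_eq = np.ones((1, m * n))
    b_eq = [min(p_weights.sum(), q_weights.sum())]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    flow = res.x.reshape(m, n)
    return (flow * dist).sum() / flow.sum()

# Toy example: 0.6 units of mass must move a distance of 1, so EMD = 0.6
p_w = np.array([0.4, 0.6])
q_w = np.array([1.0])
d = np.array([[0.0], [1.0]])
print(emd_signatures(p_w, q_w, d))
```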
Variants and extensions
Unequal probability mass
Some applications may require the comparison of distributions with different total masses. One approach is to allow for partial matching, where dirt from the more massive distribution is rearranged to make the less massive, and any leftover "dirt" is discarded at no cost.
Formally, let $w_P = \sum_i w_{p_i}$ be the total weight of $P$, and $w_Q = \sum_j w_{q_j}$ be the total weight of $Q$. We have:
$$\widehat{\mathrm{EMD}}(P, Q) = \inf_{\gamma \in \Gamma_{\le}(P, Q)} \int d(x, y)\, \mathrm{d}\gamma(x, y),$$
where $\Gamma_{\le}(P, Q)$ is the set of all measures of total mass $\min(w_P, w_Q)$ whose projections are dominated by $P$ and $Q$, respectively.
Note that this generalization of EMD is not a true distance between distributions, as it does not satisfy the triangle inequality.
An alternative approach is to allow for mass to be created or destroyed, on a global or local level, as an alternative to transportation, but with a cost penalty. In that case one must specify a real parameter , the ratio between the cost of creating or destroying one unit of "dirt", and the cost of transporting it by a unit distance. This is equivalent to minimizing the sum of the earth moving cost plus times the L1 distance between the rearranged pile and the second distribution. The resulting measure is a true distance function.
More than two distributions
The EMD can be extended naturally to the case where more than two distributions are compared. In this case, the "distance" between the many distributions is defined as the optimal value of a linear program. This generalized EMD may be computed exactly using a greedy algorithm, and the resulting functional has been shown to be Minkowski additive and convex monotone.
Computing the EMD
The EMD can be computed by solving an instance of the transportation problem, using any algorithm for the minimum-cost flow problem, e.g. the network simplex algorithm.
The Hungarian algorithm can be used to get the solution if the domain D is the set {0, 1}. If the domain is integral, it can be translated for the same algorithm by representing integral bins as multiple binary bins.
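When the measures are uniform over equally many discrete points (the minimum-weight bipartite matching case noted earlier), a Hungarian-style solver can be applied directly, since the EMD reduces to the average distance over an optimal one-to-one matching. A minimal Python sketch with illustrative point sets:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Two sets of n points, each point carrying mass 1/n.
x = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([[0.1, 0.0], [1.0, 0.2], [0.0, 1.1]])

cost = cdist(x, y)                        # pairwise Euclidean ground distances
rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimal matching
emd = cost[rows, cols].mean()             # each matched pair carries mass 1/n
print(f"EMD ~ {emd:.3f}")
```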
As a special case, if D is a one-dimensional array of "bins" of length $n$, the EMD can be efficiently computed by scanning the array and keeping track of how much dirt needs to be transported between consecutive bins. Here the bins are zero-indexed:
$$\mathrm{EMD}_0 = 0, \qquad \mathrm{EMD}_{i+1} = P_i + \mathrm{EMD}_i - Q_i, \qquad \mathrm{EMD}_{\text{total}} = \sum_{i} \left| \mathrm{EMD}_i \right|.$$
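A direct implementation of this scan as a small Python function (the toy histograms at the end are illustrative, and the function assumes equal total mass and unit distance between consecutive bins):

```python
def emd_1d(p, q):
    """EMD between two 1-D histograms with equal total mass, assuming a
    unit ground distance between consecutive bins."""
    carried = 0.0   # dirt carried from bin i into bin i + 1 (EMD_i)
    total = 0.0     # accumulated work: sum of |EMD_i|
    for p_i, q_i in zip(p, q):
        carried = p_i + carried - q_i
        total += abs(carried)
    return total

# One unit of dirt must move one bin to the right, so the distance is 1.0
print(emd_1d([0.0, 1.0, 0.0], [0.0, 0.0, 1.0]))
```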
EMD-based similarity analysis
EMD-based similarity analysis (EMDSA) is an important and effective tool in many multimedia information retrieval and pattern recognition applications. However, the computational cost of EMD is super-cubic in the number of "bins" for an arbitrary ground distance "D". Efficient and scalable EMD computation techniques for large-scale data have been investigated using MapReduce, as well as bulk synchronous parallel and resilient distributed dataset frameworks.
Applications
An early application of the EMD in computer science was to compare two grayscale images that may differ due to dithering, blurring, or local deformations. In this case, the region is the image's domain, and the total amount of light (or ink) is the "dirt" to be rearranged.
The EMD is widely used in content-based image retrieval to compute distances between the color histograms of two digital images. In this case, the region is the RGB color cube, and each image pixel is a parcel of "dirt". The same technique can be used for any other quantitative pixel attribute, such as luminance, gradient, apparent motion in a video frame, etc.
More generally, the EMD is used in pattern recognition to compare generic summaries or surrogates of data records called signatures. A typical signature consists of a list of pairs ((x1,m1), ... (xn,mn)), where each xi is a certain "feature" (e.g., color in an image, letter in a text, etc.), and mi is the "mass" (how many times that feature occurs in the record). Alternatively, xi may be the centroid of a data cluster, and mi the number of entities in that cluster. To compare two such signatures with the EMD, one must define a distance between features, which is interpreted as the cost of turning a unit mass of one feature into a unit mass of the other. The EMD between two signatures is then the minimum cost of turning one of them into the other.
EMD analysis has been used for quantitating multivariate changes in biomarkers measured by flow cytometry, with potential applications to other technologies that report distributions of measurements.
History
The concept was first introduced by Gaspard Monge in 1781, in the context of transportation theory. The use of the EMD as a distance measure for monochromatic images was described in 1989 by S. Peleg, M. Werman and H. Rom. The name "earth mover's distance" was proposed by J. Stolfi in 1994, and was used in print in 1998 by Y. Rubner, C. Tomasi and L. G. Guibas.
See also
Monge–Ampère equation
References
External links
C code for the Earth Mover's Distance (archived here)
Python implementation with references
Python2 wrapper for the C implementation of the Earth Mover's Distance
C++ and Matlab and Java wrappers code for the Earth Mover's Distance, especially efficient for thresholded ground distances
Java implementation of a generic generator for evaluating large-scale Earth Mover's Distance based similarity analysis
Demonstration of Minkowski additivity, convex monotonicity, and other properties of the Earth Movers distance
Statistical distance | Earth mover's distance | [
"Physics"
] | 1,531 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
11,915,821 | https://en.wikipedia.org/wiki/DirectLOGIC | DirectLOGIC is a range of programmable logic controllers produced by Koyo.
They are programmed using DirectSOFT via:
RS-232
USB port with USB-to-Serial adapter
10BASE-T or 10/100 Ethernet network card
Models
DL05 Micro PLC
DL06 Micro Modular PLC
DL105 Fixed I/O (brick) PLC
DL205 Modular PLC
DL305 Legacy PLC, compatible with the General Electric Series One, the Texas Instruments Series 305, and the Siemens SIMATIC TI305.
DL405 Specialty PLC
See also
SCADA
External links
Koyo PLCs
Industrial automation
Japanese brands | DirectLOGIC | [
"Engineering"
] | 125 | [
"Industrial automation",
"Automation",
"Industrial engineering"
] |
11,916,004 | https://en.wikipedia.org/wiki/Transsulfuration%20pathway | The transsulfuration pathway is a metabolic pathway involving the interconversion of cysteine and homocysteine through the intermediate cystathionine. Two transsulfurylation pathways are known: the forward and the reverse.
The forward pathway is present in several bacteria, such as Escherichia coli and Bacillus subtilis, and involves the transfer of the thiol group from cysteine to homocysteine (the methionine precursor, lacking only the S-methyl group), via the γ-replacement of the acetyl or succinyl group of a homoserine with cysteine through its thiol group to form cystathionine (catalysed by cystathionine γ-synthase, which is encoded by metB in E. coli and metI in B. subtilis). Cystathionine is then cleaved by means of the β-elimination of the homocysteine portion of the molecule, leaving behind an unstable imino acid, which is attacked by water to form pyruvate and ammonia (catalysed by the metC-encoded cystathionine β-lyase).
The production of homocysteine through transsulfuration allows the conversion of this intermediate to methionine, through a methylation reaction carried out by methionine synthase.
The reverse pathway is present in several organisms, including humans, and involves the transfer of the thiol group from homocysteine to cysteine via a similar mechanism. In Klebsiella pneumoniae the cystathionine β-synthase is encoded by mtcB, while the γ-lyase is encoded by mtcC.
Humans are auxotrophic for methionine, hence it is called an "essential amino acid" by nutritionists, but they are not auxotrophic for cysteine, owing to the reverse transsulfuration pathway. Mutations in this pathway lead to a disease known as homocystinuria, due to homocysteine accumulation.
Role of pyridoxal phosphate
All four transsulfuration enzymes require vitamin B6 in its active form (pyridoxal phosphate or PLP). Three of these enzymes (cystathionine γ-synthase excluded) are part of the Cys/Met metabolism PLP-dependent enzyme family (type I PLP enzymes).
Direct sulfurization
The direct sulfurylation pathways for the synthesis of cysteine or homocysteine proceed via the replacement of the acetyl/succinyl group with free sulfide, via the cysK- or cysM-encoded cysteine synthase and the metZ- or metY-encoded homocysteine synthase, respectively.
References
Nitrogen cycle
Sulfur metabolism
Metabolic pathways | Transsulfuration pathway | [
"Chemistry"
] | 582 | [
"Metabolic pathways",
"Sulfur metabolism",
"Nitrogen cycle",
"Metabolism"
] |
14,594,241 | https://en.wikipedia.org/wiki/OGLE-TR-211 | OGLE-TR-211 is a magnitude 15 star located about 6,000 light years away in the constellation of Carina.
Planetary system
OGLE-TR-211 has a transiting planet in a very close orbit, another hot Jupiter.
See also
OGLE-TR-182
List of extrasolar planets
References
External links
F-type stars
Planetary transit variables
Carina (constellation)
Planetary systems with one confirmed planet | OGLE-TR-211 | [
"Astronomy"
] | 85 | [
"Carina (constellation)",
"Constellations"
] |
14,595,181 | https://en.wikipedia.org/wiki/HD%20109749 | HD 109749 is a binary star system about 206 light years away in the constellation of Centaurus. The pair have a combined apparent visual magnitude of 8.08, which is too faint to be visible to the naked eye. The primary component has a close orbiting exoplanet companion. The system is drifting closer with a heliocentric radial velocity of −13.2 km/s.
The primary component, HD 109749 A, is a G-type subgiant star with a spectral type of G3IV, indicating it is an evolved star with a luminosity higher than that of a main sequence star. It has a mass of and a radius of . The star is shining with a luminosity of and has an effective temperature of 5,860 K. Evolutionary models estimate an age of 4.1 billion years. HD 109749 A is chromospherically inactive and has a high metallicity, with an iron abundance 178% of Sun's.
The secondary, HD 109749 B, is a K-type main sequence star with an apparent magnitude of 10.3. It has a mass of about and is located at a separation of 8.4 arcseconds, which corresponds to a projected separation of 490 AU. This star has the same proper motion as the primary and seems to be at the same distance, confirming they form a physical binary system.
Planetary system
In 2005, an exoplanet was discovered around HD 109749 A. It was detected by the radial velocity method as part of the N2K Consortium. It is a hot Jupiter with a minimum mass of and a semimajor axis of 0.06 AU.
See also
HD 149143
List of extrasolar planets
References
G-type subgiants
K-type main-sequence stars
Binary stars
Planetary systems with one confirmed planet
Centaurus
Durchmusterung objects
109749
61595
J12371639-4048435 | HD 109749 | [
"Astronomy"
] | 404 | [
"Centaurus",
"Constellations"
] |
14,595,486 | https://en.wikipedia.org/wiki/HD%20111232 | HD 111232 is a star in the southern constellation of Musca. It is too faint to be visible with the naked eye, having an apparent visual magnitude of 7.59. The distance to this star is 94.5 light years based on parallax. It is drifting away from the Sun with a radial velocity of +104 km/s, having come to within some 264,700 years ago. The absolute magnitude of this star is 5.25, indicating it would have been visible to the naked eye at that time.
This is an ancient, thick disk population II star with an estimated age of twelve billion years. It is a G-type main-sequence star with a stellar classification of G8 V Fe-1.0, indicating an anomalous underabundance of iron in the stellar atmosphere. The star has 80% of the mass of the Sun and 88% of the Sun's radius. It is spinning slowly with a projected rotational velocity of 0.4 km/s. X-ray emission has not been detected, suggesting a low level of coronal activity. The star is radiating 70% of the luminosity of the Sun from its photosphere at an effective temperature of 5,648 K.
Planetary system
A superjovian planetary companion was detected by the CORALIE team, based on observations beginning in 2003. Planets around such metal-poor stars are rare (the only two known similar cases as of 2019 are HD 22781 and HD 181720). An astrometric measurement of the planet's inclination and true mass was published in 2022 as part of Gaia DR3. Later in 2022, these parameters were revised along with the detection of a second substellar companion, likely a brown dwarf.
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Musca
Durchmusterung objects
111232
062534
J12485177-6825304 | HD 111232 | [
"Astronomy"
] | 398 | [
"Musca",
"Constellations"
] |
14,596,042 | https://en.wikipedia.org/wiki/HD%20117207 | HD 117207 is a star in the southern constellation Centaurus. With an apparent visual magnitude of 7.24, it is too dim to be visible to the naked eye but can be seen with a small telescope. Based upon parallax measurements, it is located at a distance of from the Sun. The star is drifting closer with a radial velocity of −17.4 km/s. It has an absolute magnitude of 4.67.
This object has a stellar classification of G7IV-V, showing blended spectral traits of a G-type main-sequence star and an older, evolving subgiant star. It is around four billion years old with 5% greater mass than the Sun and a 7% larger radius. The star is radiating 1.16 times the luminosity of the Sun from its photosphere at an effective temperature of 5,644 K.
In 2005, a planet was found orbiting the star using the radial velocity method, and was designated HD 117207 b. The orbital elements of this planet were refined in 2018, showing an orbital period of , a semimajor axis of , and an eccentricity of 0.16. The minimum mass of this object is nearly double that of Jupiter. If an inner planet is orbiting the star, it must have an orbital period no greater than to satisfy Hill's criteria for dynamic stability. In 2023, the inclination and true mass of HD 117207 b were determined via astrometry.
See also
HD 117618
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Centaurus
CD-34 08913
117207
065808 | HD 117207 | [
"Astronomy"
] | 342 | [
"Centaurus",
"Constellations"
] |
14,596,158 | https://en.wikipedia.org/wiki/HD%20118203 | HD 118203 is a star with an orbiting exoplanet located in the northern circumpolar constellation of Ursa Major. It has the proper name Liesma, which means flame, and it is the name of a character from the Latvian poem Staburags un Liesma (Staburags and Liesma). The name was selected in the NameExoWorlds campaign by Latvia, during the 100th anniversary of the IAU.
The apparent visual magnitude of HD 118203 is 8.06, which means it is invisible to the naked eye but it can be seen using binoculars or a telescope. Based on parallax measurements, it is located at a distance of 300 light years from the Sun. The star is drifting closer with a radial velocity of −29 km/s. Based on its position and space velocity this is most likely (97% chance) an older thin disk star. An exoplanet has been detected in a close orbit around the star.
The spectrum of HD 118203 matches a G-type main-sequence star with a class of G0V. It has a low level of chromospheric activity, which means a low level of radial velocity jitter for planet detection purposes. The star has 1.23 times the mass of the Sun and double the Sun's radius. It is around 5.4 billion years old and is spinning with a projected rotational velocity of 7.0 km/s. HD 118203 is radiating 3.8 times the luminosity of the Sun from its photosphere at an effective temperature of 5,741 K.
Planetary system
In 2006, a hot Jupiter, HD 118203 b, was reported in an eccentric orbit around this star. It was discovered using the radial velocity method based on observation of high-metallicity stars begun in 2004. In 2020, it was found that this is a transiting planet, which allowed the mass and radius of the body to be determined. This exoplanet has more than double the mass of Jupiter and is 13% greater in radius. The fact that the parent star is among the brighter known planet hosts (as of 2020) makes it an interesting object for further study. This planet received the proper name Staburags in the 2019 NameExoWorlds campaign.
In 2024, the star HD 118203 was found to display variability with a period matching that of planet b's orbit, suggesting magnetic interaction between the star and planet.
Also in 2024, a second massive planet was discovered using radial velocity observations as well as Hipparcos and Gaia astrometry. HD 118203 c is about 11 times the mass of Jupiter and takes 14 years to complete an orbit around the star. Like planet b, the orbit of planet c is close to edge-on, suggesting an aligned planetary system. The presence of any additional transiting planets at least twice the size of Earth and with periods less than 100 days was ruled out by the observations.
See also
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with two confirmed planets
Ursa Major
BD+54 1609
118203
066192
1271
Liesma | HD 118203 | [
"Astronomy"
] | 653 | [
"Ursa Major",
"Constellations"
] |
14,596,160 | https://en.wikipedia.org/wiki/Continuous%20cooling%20transformation | A continuous cooling transformation (CCT) phase diagram is often used when heat treating steel. These diagrams are used to represent which types of phase changes will occur in a material as it is cooled at different rates. These diagrams are often more useful than time-temperature-transformation diagrams because it is more convenient to cool materials at a certain rate (temperature-variable cooling), than to cool quickly and hold at a certain temperature (isothermal cooling).
Types of continuous cooling diagrams
There are two types of continuous cooling diagrams drawn for practical purposes.
Type 1: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction and ending with a transformation finish temperature for all products against transformation time for each cooling curve.
Type 2: This is the plot beginning with the transformation start point, cooling with a specific transformation fraction, and ending with a transformation finish temperature for all products, plotted against cooling rate or bar diameter of the specimen for each type of cooling medium.
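To make concrete how constant-rate cooling paths appear on such a diagram, the Python sketch below computes the times at which linear cooling curves T(t) = T0 - R*t reach a set of temperatures; plotted on a logarithmic time axis, these are the curves that are overlaid on the transformation-start and transformation-finish boundaries. All numbers are hypothetical illustration values, not data for any particular steel.

```python
import numpy as np

# Sketch: constant-rate cooling curves as they would be plotted on a CCT diagram.
# All values are hypothetical and chosen only for illustration.

t_start = 850.0                            # austenitizing temperature, deg C
rates = [0.5, 5.0, 50.0]                   # cooling rates, deg C per second
temperatures = np.linspace(800.0, 200.0, 7)

for rate in rates:
    # Time at which the linear cooling path T(t) = t_start - rate * t reaches T
    times = (t_start - temperatures) / rate
    points = ", ".join(f"{T:.0f} C @ {t:.1f} s" for T, t in zip(temperatures, times))
    print(f"cooling rate {rate:>5.1f} C/s: {points}")
```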
See also
Isothermal transformation
Phase diagram
References
Diagrams
Phase transitions
Metallurgy | Continuous cooling transformation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 208 | [
"Physical phenomena",
"Phase transitions",
"Mechanical engineering stubs",
"Metallurgy",
"Phases of matter",
"Critical phenomena",
"Materials science",
"nan",
"Mechanical engineering",
"Statistical mechanics",
"Matter"
] |
14,596,247 | https://en.wikipedia.org/wiki/HD%20122430 | HD 122430 is a single star in the equatorial constellation of Hydra. It has an orange hue and is faintly visible to the naked eye with an apparent visual magnitude of 5.47. The star is located at a distance of 105.6 light years from the Sun based on parallax.
This is an aging giant star with a stellar classification of K2–3III. It has exhausted the hydrogen fuel in its core, although it is only two billion years old, younger than the Sun's 4.6 billion years. HD 122430 has a mass of 1.6 times and a radius of 22.9 times that of the Sun. Despite its younger age, it has slightly lower metallicity, approximately 90% of the Sun's. It is radiating 190 times the luminosity of the Sun from its photosphere at an effective temperature of 4,300 K.
A candidate exoplanet was reported orbiting the star via the radial velocity method at a conference in 2003, and designated HD 122430 b. It has an orbital period of and an eccentricity of 0.68. However, a follow-up study by Soto et al. (2015) failed to detect a signal, so it remains unconfirmed.
See also
HD 47536
List of extrasolar planets
References
K-type giants
Hydra (constellation)
Durchmusterung objects
122430
068581
5265 | HD 122430 | [
"Astronomy"
] | 286 | [
"Hydra (constellation)",
"Constellations"
] |
14,596,496 | https://en.wikipedia.org/wiki/Photosynthetic%20reaction%20centre%20protein%20family | Photosynthetic reaction centre proteins are the main protein components of photosynthetic reaction centres (RCs) of bacteria and plants. They are transmembrane proteins embedded in the chloroplast thylakoid or bacterial cell membrane.
Plants, algae, and cyanobacteria have one type of PRC for each of their two photosystems. Non-oxygenic bacteria, on the other hand, have an RC resembling either the Photosystem I centre (Type I) or the Photosystem II centre (Type II). In either case, PRCs have two related proteins (L/M; D1/D2; PsaA/PsaB) making up a quasi-symmetrical 5-helical core complex with pockets for pigment binding. The two types are structurally related and share a common ancestor. Each type has different pockets for ligands to accommodate its specific reactions: while Type I RCs use iron-sulfur clusters to accept electrons, Type II RCs use quinones. The centre units of Type I RCs also have six extra transmembrane helices for gathering energy.
In bacteria
The Type II photosynthetic apparatus in non-oxygenic bacteria consists of light-harvesting protein-pigment complexes LH1 and LH2, which use carotenoid and bacteriochlorophyll as primary donors. LH1 acts as the energy collection hub, temporarily storing it before its transfer to the photosynthetic reaction centre (RC). Electrons are transferred from the primary donor via an intermediate acceptor (bacteriophaeophytin) to the primary acceptor (quinone Qa), and finally to the secondary acceptor (quinone Qb), resulting in the formation of ubiquinol QbH2. RC uses the excitation energy to shuffle electrons across the membrane, transferring them via ubiquinol to the cytochrome bc1 complex in order to establish a proton gradient across the membrane, which is used by ATP synthetase to form ATP.
The core complex is anchored in the cell membrane, consisting of one unit of RC surrounded by LH1; in some species there may be additional subunits. A type II RC consists of three subunits: L (light), M (medium), and H (heavy). Subunits L and M provide the scaffolding for the chromophore, while subunit H contains a cytoplasmic domain. In Rhodopseudomonas viridis, there is also a non-membranous tetrahaem cytochrome (4Hcyt) subunit on the periplasmic surface.
The structure of a type I system from the anaerobe Heliobacterium modesticaldum was resolved in 2017. As a homodimer consisting of only one type of protein in the core complex, it is considered a closer approximation of the ancestral unit that existed before the Type I/II split than any of the heterodimeric systems.
Oxygenic systems
The D1 (PsbA) and D2 (PsbD) photosystem II (PSII) reaction centre proteins from cyanobacteria, algae and plants show only approximately 15% sequence homology with the L and M subunits; however, the conserved amino acids correspond to the binding sites of the photochemically active cofactors. As a result, the reaction centres (RCs) of purple photosynthetic bacteria and PSII display considerable structural similarity in terms of cofactor organisation.
The D1 and D2 proteins occur as a heterodimer that forms the reaction core of PSII, a multisubunit protein-pigment complex containing over forty different cofactors, which are anchored in the cell membrane in cyanobacteria, and in the thylakoid membrane in algae and plants. Upon absorption of light energy, the D1/D2 heterodimer undergoes charge separation, and the electrons are transferred from the primary donor (chlorophyll a) via phaeophytin to the primary acceptor quinone Qa, then to the secondary acceptor Qb, which, as in the bacterial system, culminates in the production of ATP. However, PSII has an additional function over the bacterial system. At the oxidising side of PSII, a redox-active tyrosine residue in the D1 protein reduces P680; the oxidised tyrosine then withdraws electrons from a manganese cluster, which in turn withdraws electrons from water, leading to the splitting of water and the formation of molecular oxygen. PSII thus provides a source of electrons that can be used by photosystem I to produce the reducing power (NADPH) required to convert CO2 to glucose.
Instead of assigning specialized roles to quinones, the PsaA-PsaB photosystem I centre evolved to make both quinones immobile. It also recruited the iron-sulphur PsaC subunit to further mitigate the risk of oxidative stress.
In viruses
Photosynthetic reaction centre genes from PSII (PsbA, PsbD) have been discovered within marine bacteriophage. Though it is widely accepted dogma that arbitrary pieces of DNA can be borne by phage between hosts (transduction), one would hardly expect to find transduced DNA within a large number of viruses. Transduction is presumed to be common in general, but for any single piece of DNA to be routinely transduced would be highly unexpected. Instead, conceptually, a gene routinely found in surveys of viral DNA would have to be a functional element of the virus itself (this does not imply that the gene would not be transferred among hosts - which the photosystem within viruses is - but instead that there is a viral function for the gene, that it is not merely hitchhiking with the virus). However, free viruses lack the machinery needed to support metabolism, let alone photosynthesis. As a result, photosystem genes are not likely to be a functional component of the virus like a capsid protein or tail fibre. Instead, it is expressed within an infected host cell. Most virus genes that are expressed in the host context are useful for hijacking the host machinery to produce viruses or for replication of the viral genome. These can include reverse transcriptases, integrases, nucleases or other enzymes. Photosystem components do not fit this mould either.
The production of an active photosystem during viral infection provides active photosynthesis to dying cells. This is not viral altruism towards the host, however. The problem with viral infections tends to be that they disable the host relatively rapidly. As protein expression is shunted from the host genome to the viral genome, the photosystem degrades relatively rapidly (due in part to the interaction with light, which is highly corrosive), cutting off the supply of nutrients to the replicating virus. A solution to this problem is to add genes for the rapidly degraded photosystem components to the virus, such that the nutrient flow is uninhibited and more viruses are produced. One would expect that this discovery will lead to other discoveries of a similar nature; that elements of the host metabolism key to viral production and easily damaged during infection are actively replaced or supported by the virus during infection. Indeed, recently, PSI gene cassettes containing whole gene suites [(psaJF, C, A, B, K, E and D) and (psaD, C, A and B)] were also reported to exist in marine cyanophages from the Pacific and Indian Oceans.
Subfamilies
Photosynthetic reaction centre, M subunit
Photosystem II reaction centre protein PsbA/D1
Photosystem II reaction centre protein PsbD/D2
Photosynthetic reaction centre, L subunit
See also
C-terminal processing peptidase, also known as photosystem II D1 protein processing peptidase
Notes
References
Protein domains
Protein families
Transmembrane proteins | Photosynthetic reaction centre protein family | [
"Biology"
] | 1,663 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,596,562 | https://en.wikipedia.org/wiki/Nyctinasty | In plant biology, nyctinasty is the circadian rhythm-based nastic movement of higher plants in response to the onset of darkness, or a plant "sleeping". Nyctinastic movements are associated with diurnal light and temperature changes and controlled by the circadian clock. It has been argued that for plants that display foliar nyctinasty, it is a crucial mechanism for survival; however, most plants do not exhibit any nyctinastic movements. Nyctinasty is found in a range of plant species and across xeric, mesic, and aquatic environments, suggesting that this singular behavior may serve a variety of evolutionary benefits. Examples are the closing of the petals of a flower at dusk and the sleep movements of the leaves of many legumes.
Physiology
Plants use phytochrome to detect red and far red light. Depending on which kind of light is absorbed, the protein can switch between a Pr state that absorbs red light and a Pfr state that absorbs far red light. Red light converts Pr to Pfr and far red light converts Pfr to Pr. Many plants use phytochrome to establish circadian cycles which influence the opening and closing of leaves associated with nyctastic movements. Anatomically, the movements are mediated by pulvini. Pulvinus cells are located at the base or apex of the petiole and the flux of water from the dorsal to ventral motor cells regulates leaf closure. This flux is in response to movement of potassium ions between pulvinus and surrounding tissue. Movement of potassium ions is connected to the concentration of Pfr or Pr. In Albizia julibrissin, longer darker periods, leading to low Pfr, result in a faster leaf opening. In the SLEEPLESS mutation of Lotus japonicus, the pulvini are changed into petiole-like structures, rendering the plant incapable of closing its leaflets at night. Non-pulvinar mediated movement is also possible and happens through differential cell division and growth on either side of the petiole, resulting in a bending motion within the leaves to the desired position.
Leaf movement is also controlled by bioactive substances known as leaf opening or leaf closing factors. Several leaf-opening and leaf-closing factors have been characterized biochemically. These factors differ among plants. Leaf closure and opening are mediated by the relative concentrations of leaf opening and closing factors in a plant. Either the leaf opening or closing factor is a glycoside, which is inactivated by hydrolysis of the glycosidic bond via beta glucosidase. In Lespedeza cuneata the leaf opening factor, potassium lespedezate, is hydrolyzed to 4-hydroxyphenylpyruvic acid. In Phyllanthus urinaria, the leaf closing factor phyllanthurinolactone is hydrolyzed to its aglycon during the day. Beta glucosidase activity is regulated via circadian rhythms.
Fluorescence studies have shown that the binding sites of leaf opening and closing factors are located on the surface of the motor cell. Shrinking and expansion of the motor cell in response to this chemical signal allows for leaf opening and closure. The binding of leaf opening and closing factors is specific to related plants. The leaf movement factor of Chamaecrista mimosoides (formerly Cassia mimosoides) was found to not bind to the motor cell of Albizia julibrissin. The leaf movement factor of Albizia julibrissin similarly didn't bind to the motor cell of Chamaecrista mimosoides, but did bind to Albizia saman and Albizia lebbeck.
Function
The functions of nyctinastic movement have yet to be conclusively identified, although several have been proposed. Minorsky hypothesized that nyctinastic behaviors are adaptive due to the plant being able to reduce its surface area during night time, which can lead to better temperature retention and also reduces night-time herbivory. Minorsky specifically suggests a Tritrophic Hypothesis in which he considers the predators of herbivores in addition to the plants and herbivores themselves. By moving leaves up or down, herbivores become more visible to nocturnal predators in both a spatial and olfactory sense, increasing herbivore predation and subsequently decreasing damage to a plant's leaves. Studies using mutant plants with a loss of function gene that results in petiole growth instead of pulvini found that these plants have less biomass and smaller leaf area than the wild type. This indicates nyctinastic movement may be beneficial toward plant growth.
Charles Darwin believed that nyctinasty exists to reduce the risk of plants freezing.
Nyctinasty may occur to protect the pollen, keeping pollen dry and intact during the nighttime when most pollinating insects are inactive. Conversely, some flowers that are pollinated by moths or bats exhibit nyctinastic flower opening at night.
History
The earliest recorded observation of this behavior in plants dates back to 324 BC when Androsthenes of Thasos, a companion to Alexander the Great, noted the opening and closing of tamarind tree leaves from day to night. Carl Linnaeus (1729) proposed that this was the plants sleeping, but this idea has been widely contested.
References
External links
Plant physiology | Nyctinasty | [
"Biology"
] | 1,090 | [
"Plant physiology",
"Plants"
] |
14,596,677 | https://en.wikipedia.org/wiki/Plastocyanin%20family%20of%20copper-binding%20proteins | Plastocyanin/azurin family of copper-binding proteins (or blue (type 1) copper domain) is a family of small proteins that bind a single copper atom and that are characterised by an intense electronic absorption band near 600 nm (see copper proteins). The most well-known members of this class of proteins are the plant chloroplastic plastocyanins, which exchange electrons with cytochrome c6, and the distantly related bacterial azurins, which exchange electrons with cytochrome c551. This family of proteins also includes amicyanin from bacteria such as Methylobacterium extorquens or Paracoccus versutus (Thiobacillus versutus) that can grow on methylamine; auracyanins A and B from Chloroflexus aurantiacus; blue copper protein from Alcaligenes faecalis; cupredoxin (CPC) from Cucumis sativus (Cucumber) peelings; cusacyanin (basic blue protein; plantacyanin, CBP) from cucumber; halocyanin from Natronomonas pharaonis (Natronobacterium pharaonis), a membrane-associated copper-binding protein; pseudoazurin from Pseudomonas; rusticyanin from Thiobacillus ferrooxidans; stellacyanin from Rhus vernicifera (Japanese lacquer tree); umecyanin from the roots of Armoracia rusticana (Horseradish); and allergen Ra3 from ragweed. This pollen protein is evolutionarily related to the above proteins, but seems to have lost the ability to bind copper. Although there is an appreciable amount of divergence in the sequences of all these proteins, the copper ligand sites are conserved.
References
Protein domains
Peripheral membrane proteins
Copper proteins | Plastocyanin family of copper-binding proteins | [
"Biology"
] | 400 | [
"Protein domains",
"Protein classification"
] |
14,596,980 | https://en.wikipedia.org/wiki/General%20bacterial%20porin%20family | General bacterial porins are a family of porin proteins from the outer membranes of Gram-negative bacteria. The porins act as molecular filters for hydrophilic compounds. They are responsible for the 'molecular sieve' properties of the outer membrane. Porins form large water-filled channels which allow the diffusion of hydrophilic molecules into the periplasmic space. Some porins form general diffusion channels that allow any solute up to a certain size (that size is known as the exclusion limit) to cross the membrane, while other porins are specific for one particular solute and contain a binding site for that solute inside the pores (these are known as selective porins). As porins are the major outer membrane proteins, they also serve as receptor sites for the binding of phages and bacteriocins.
General diffusion porins usually assemble as a trimer in the membrane, and the transmembrane core of these proteins is composed exclusively of beta strands. It has been shown that a number of porins are evolutionarily related.
Structure of Porins
Porins are composed of β-strands, which are, in general, linked together by beta turns on the periplasmic side of the outer membrane and long loops on the external side of the membrane. The β-strands lie in an antiparallel fashion and form a cylindrical tube, called a β-barrel[2]. The amino acid composition of the porin β-strands is unique in that polar and non-polar residues alternate along them. This means that the non-polar residues face outwards so as to interact with the non-polar lipid membrane, whereas the polar residues face inwards into the center of the β-barrel to form the aqueous channel. The phospholipids that comprise the outer membrane give it the same semi-permeable characteristics as the cytoplasmic membrane.
The porin channel is partially blocked by a loop, called the eyelet, which projects into the cavity. In general, it is found between strands 5 and 6 of each barrel, and it defines the size of solute that can traverse the channel. It is lined almost exclusively with charged amino acyl residues arranged on opposite sides of the channel, creating a transversal electric field across the pore. The eyelet has a local surplus of negative charges from four glutamic acid and seven aspartic acid residues (in contrast to one histidine, two lysine and three arginine residues), which is partially compensated for by two bound calcium atoms, and this asymmetric arrangement of molecules is thought to have an influence on the selection of molecules that can pass through the channel[3].
Some osmoporins, such as OmpC, form a complex with the alpha-helical transmembrane protein MlaA to maintain outer membrane lipid asymmetry.
Homologous Families
Three-dimensional structural analyses show that there are many (at least 48) other families which share sufficient sequence similarity to the General Bacterial Porin (GBP) family and are homologous to it in structure and function. One such family is the Sugar Porin (SP) family (TC# 1.B.3). The SP family includes the well-characterized maltoporin of E. coli, for which the three-dimensional structures with and without its substrate have been obtained by X-ray diffraction. The protein consists of an 18-β-stranded β-barrel, in contrast to proteins of the General Bacterial Porin (GBP) family and the Rhodobacter PorCa Porin (RPP) family (TC# 1.B.7), which consist of 16-β-stranded β-barrels. Although maltoporin contains a wider β-barrel than the porins of the GBP (TC# 1.B.1) and RPP (TC# 1.B.7) families, it exhibits a narrower channel, showing only 5% of the ionic conductance of the latter porins.
The Rhodobacter PorCa protein, the only well-characterized member of the RPP family, was the first porin to yield its three-dimensional structure by X-ray crystallography. It has a 16-stranded β-barrel structure similar to that of the members of the GBP (TC# 1.B.1) family. Paupit et al. (1991) presented crystal structures of phosphoporin (PhoE; TC# 1.B.1.1.2), maltoporin (LamB; TC# 1.B.3.1.1) and matrix porin (OmpF), all of E. coli, and found that these have three-dimensional folds similar to that of the Rhodobacter porin, PorCa. Structural and sequence analyses provide firm evidence that the GBP, SP and RPP families, together with 44 additional families in TCDB, belong to a single superfamily. Homology between members of the GBP and RPP families has also been demonstrated by statistical means (M. Saier, unpublished results).
Porin Superfamilies
General bacterial porin family belongs to Porin Superfamily I. The homologous families Sugar Porin(SP) family and Rhodobacter PorCa Porin (RPP) Family also belong to the Porin Superfamily I.
Subfamilies
Porin, Neisseria sp. type
References
Protein domains
Outer membrane proteins | General bacterial porin family | [
"Biology"
] | 1,131 | [
"Protein domains",
"Protein classification"
] |
14,597,135 | https://en.wikipedia.org/wiki/Outer%20membrane%20receptor | Outer membrane receptors, also known as TonB-dependent receptors, are a family of beta barrel proteins named for their localization in the outer membrane of gram-negative bacteria. TonB complexes sense signals from the outside of bacterial cells and transmit them into the cytoplasm, leading to transcriptional activation of target genes.
TonB-dependent receptors in gram-negative bacteria are associated with the uptake and transport of large substrates such as iron siderophore complexes and vitamin B12.
TonB interactions with other proteins
In Escherichia coli, the TonB protein interacts with outer membrane receptor proteins that carry out high-affinity binding and energy-dependent uptake of specific substrates into the periplasmic space. These substrates are either poorly transported through non-specific porin channels or are encountered at very low concentrations. In the absence of TonB, these receptors bind their substrates but do not carry out active transport. TonB-dependent regulatory systems consist of six protein components.
The proteins that are currently known or presumed to interact with TonB include BtuB, CirA, FatA, FcuT, FecA, FhuA, FhuE, FepA, FptA, HemR, IrgA, IutA, PfeA, PupA, LbpA and TbpA. The TonB protein also interacts with some colicins. Most of these proteins contain a short conserved region at their N-terminus.
TonB-dependent receptor plug domain
TonB-dependent receptors include a plug domain, an independently folding subunit that acts as the channel gate, blocking the pore until the channel is bound by ligand. At this point it undergoes conformational changes, opening the channel.
TonB as phage receptor
TonB also acts as a receptor for Salmonella bacteriophage H8. In fact, H8 infection is TonB dependent.
References
Protein domains
Protein families
Outer membrane proteins | Outer membrane receptor | [
"Biology"
] | 390 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,597,216 | https://en.wikipedia.org/wiki/Lolium%20arundinaceum | Lolium arundinaceum, tall fescue, is a cool-season perennial C3 species of grass that is native to Europe and introduced to California. It occurs on woodland margins, in grassland and in coastal marshes. It is also an important forage grass with many cultivars that are used in agriculture, and it is grown as an ornamental grass in gardens and sometimes used as a phytoremediation plant.
Most publications have used the names Festuca arundinacea or, more recently, Schedonorus arundinaceus for this species, but DNA studies appear to have settled a long debate that it should be included within the genus Lolium instead.
Description
Tall fescue is a long-lived tuft-forming perennial with erect to spreading hollow flowering stems up to about 165 cm (5'6") tall (exceptionally up to 200 cm) which are hairless (glabrous), including the leaf sheaths, but with a short (1.5 mm) ligule and slightly hairy (ciliate) pointed auricles that can wrap slightly around the stem. The leaf blade is flat, up to about 10 mm wide, and also glabrous, but rough on both sides and the margins. The tillers (non-flowering stems) are typically shorter but otherwise similar to the culms. The leaves have prominent veins running parallel along the entire length of the blade. Emerging leaves are rolled in the bud; most grasses are folded rather than rolled, which makes this a key identification feature of tall fescue.
Flowering typically occurs from early June until late August, with an erect to slightly nodding open panicle up to about 40 cm (1'6") long. The branches are normally in pairs, each of which has 3-18 spikelets, which are 9-15 mm long and comprise 4-8 bisexual florets and two short, unequal glumes. The lower glume has only 1 nerve whereas the upper one has 3. The lemmas typically have a short (3 mm) awn arising just below the tip. Each floret has 3 stamens with anthers about 3-4 mm long. The fruit is a nut or caryopsis with the seed tightly enclosed by the hardened lemma and palea.
Taxonomy
Tall fescue was first described (as Festuca arundinacea) by the German naturalist Johann Christian Daniel von Schreber in 1771. Its inclusion within the genus Festuca was due to the similarity of the flowers and inflorescences. However, there has been much debate since 1898 about its relationship to the genus Lolium, largely because of hybridization with Lolium perenne (species in separate genera are far less likely to form hybrids than those within the same genus). Recent DNA studies have shown that it should indeed be considered a ryegrass (Lolium) rather than a fescue (Festuca) because these species are more closely related to each other, despite the fact that ryegrasses have inflorescences of spikes rather than racemes.
Its chromosome number is 2n = 42.
Distribution and status
Tall fescues cultivated species has become a common sight in California grasslands and habitats, such as the California coastal prairie plant community, since its introduction this species has been topic of debate.
Habitat and ecology
In its native European environment, tall fescue is found in damp grasslands, river banks, and coastal area. The British National Vegetation Classification lists it as a minor component in a range of grassland types, but it is particularly characteristic of its own MG12 Festuca arundinacea community, which is a tussocky type of pasture that occurs in brackish grazing marshes around the south and west coasts. This vegetation type is also home to some uncommon plants such as parsley water-dropwort and slender spike-rush. Tall fescue is also found in a number of salt marsh and maritime cliff communities. In New Zealand, where it is introduced, the species is particularly prolific in salt marshes, where it is often dominant.
Its Ellenberg values in Britain are L = 8, F = 6, R = 7, N = 6, and S = 1, which show that it favours damp, brightly sunny places with neutral soils and moderate fertility, and that it can occur in slightly brackish situations.
Endophyte association
Tall fescue can be found growing in most soils of the southeast including marginal, acidic, and poorly drained soils and in areas of low fertility, and where stresses occur due to drought and overgrazing. These beneficial attributes are now known to be a result of a symbiotic association with the fungus Neotyphodium coenophialum.
This association between tall fescue and the fungal endophyte is a mutualistic symbiotic relationship (both symbionts derive benefits from it). The fungus remains completely intercellular, growing between the cells of the aboveground parts of its grass host. The fungus is asexual, and is transmitted to new generations of tall fescue only through seed, a mode known as vertical transmission. Thus in nature, the fungus does not live outside the plant. Viability of the fungus in seeds is limited; typically, after a year or two of seed storage the fungal endophyte mycelium has died, and seeds germinated will result in plants that are endophyte-free.
The tall fescue–endophyte symbiosis confers a competitive advantage to the plant. Endophyte-infected tall fescue compared to endophyte-free tall fescue deters herbivory by insects and mammals, bestows drought resistance, and disease resistance. In return for shelter, seed transmission, and nutrients the endophyte produces secondary metabolites. These metabolites, namely alkaloids, are responsible for increased plant fitness. Alkaloids in endophytic tall fescue include 1-aminopyrrolizidines (lolines), ergot alkaloids (clavines, lysergic acids, and derivative alkaloids), and the pyrrolopyrazine, peramine.
The lolines are the most abundant alkaloids, with concentrations 1000 higher than those of ergot alkaloids. Endophyte-free grasses do not produce lolines, and, as shown for the closely related endophyte commonly occurring in meadow fescue, Neotyphodium uncinatum, the endophyte can produce lolines in axenic laboratory culture. However, although N. coenophialum possesses all the genes for loline biosynthesis, it does not produce lolines in culture. So in the tall fescue symbiosis, only the interaction of the host and endophyte produces the lolines. Lolines have been shown to deter insect herbivory, and may cause various other responses in higher organisms. Despite their lower concentrations, ergot alkaloids appear to significantly affect animal growth. Ergots cause changes in normal homeostatic mechanisms in animals that result in toxicity manifested through reduced weight gains, elevated core temperatures, restricted blood flow, reduced milk production and reproductive problems. Peramine, like the ergot alkaloids, is found in much lower concentrations in the host compared with loline alkaloids. Its activity has been shown to be primarily insecticidal, and has not been linked to toxicity in mammals or other herbivores.
Uses
Tall fescue was introduced into the United States in the late 19th century, but it did not establish itself as a widely used perennial forage until the 1940s. As in Europe, tall fescue has become an important, well-adapted cool season forage grass for agriculture in the US with many cultivars. In addition to forage, it has become an important grass for turf and soil conservation. Tall fescue is the most heat tolerant of the major cool season grasses. Tall fescue has a deep root system compared to other cool season grasses. This non-native grass is well adapted to the "transition zone" Mid Atlantic and Southeastern United States and now occupies over .
The dominant cultivar grown in the United States is Kentucky 31. In 1931 E. N. Fergus, a professor of agronomy at the University of Kentucky, collected seed from a population on a hillside in Menifee County, Kentucky although formal cultivar release did not happen until 1943. Fergus heard about this "wonder grass" while judging a sorghum syrup competition in a nearby town. He wanted to see this grass because it was green, lush, and growing well on a sloped hillside during a drought. While visiting the site he was impressed and took seed samples with him. With this seed he conducted variety trials, initiated seed increase nurseries, and lauded its performance. It was released as Kentucky 31 in 1943 and today it dominates grasslands in the humid southeastern US. In 1943, Fergus and others recognized this tall fescue cultivar as being vigorous, widely adaptable, able to withstand poor soil conditions, resistant to pests and drought. It is used primarily in pastures and low maintenance situations.
Breeders have created numerous cultivars that are dark green with desirable narrower blades than the light green coarse bladed K-31. Tall fescue is the grass on the South Lawn of the White House.
The predominant cultivar found in British pastures is S170.
Endophyte infected tall fescue effects on animals
Broodmares and foals
Horses are especially prone to reproductive problems associated with tall fescue, often resulting in death of the foal, mare, or both. Horses which are pregnant may be strongly affected by alkaloids produced by the tall fescue symbiont. Broodmares that forage on infected fescue may have prolonged gestation, foaling difficulty, thickened placenta, or impaired lactation. In addition, the foals may be born weakened or dead. To moderate toxicosis, it is recommended that pregnant mares should be taken off infected tall fescue pasture for 60–90 days before foaling as late gestation problems are most common.
Cattle
Fescue toxicity in cattle appears as roughening of the coat in the summer and intolerance to heat. Cattle that graze on tall fescue are more likely to stay in the shade or wade in the water in hot weather. In the winter, a condition known as "fescue foot" might afflict cattle. This results from vasoconstriction of the blood vessels especially in the extremities, and causes a gangrenous condition. Untreated, the hoof might slough off. Additionally, cattle may experience decreased weight gains and poor milk production when heavily grazing infected tall fescue pasture. To deter toxicosis cattle should be given alternative feed to dilute their infected tall fescue intake.
Nutrient pools under tall fescue pasture
Carbon cycling in terrestrial ecosystems is a major focus of research. Terrestrial carbon sequestration is the process of removing carbon dioxide from the atmosphere via photosynthesis and storing this carbon in either plant or soil carbon pools. Increases in soil organic carbon help aggregate the soil, increase infiltration, reduce erosion, increase soil fertility, and act as long lived pools of soil carbon. Many studies have suggested that long term endophyte-infected tall fescue plots increase soil carbon storage in the soil by limiting the microbial and macrofaunal activity to break down endophyte infected organic matter input and by increasing inputs of carbon via plant production. While the long term studies tend to show an increase in carbon storage, the short term studies do not. However, short term studies have shown that the endophyte association results in higher above- and belowground plant biomass production compared to uninfected plants, as well as a decrease in certain microbial communities. Site-specific characteristics, such as management and climate, need to be further understood to realize the ecological role and potential benefits of tall fescue and the endophyte association as it relates to carbon sequestration.
Novel endophytes
New cultivars are being bred and tested every year. A major focus of research is producing endophyte-infected tall fescue cultivars that have no detrimental effects to livestock while keeping the endophytic effects of reduced insect herbivory, disease resistance, drought tolerance, and extended growing season. Novel endophytes, also referred to as "friendly" endophytes, are symbiotic fungi that are associated with tall fescue, but do not produce target alkaloids in toxic concentrations. A widely used and tested novel endophyte is called MaxQ and is grown in the tall fescue grass host Georgia-Jesup. This cultivar of tall fescue-novel endophyte combination produces ergot alkaloids at near zero levels while maintaining the concentration of other alkaloids.
See also
Phytoremediation plants
Hyperaccumulators table – 3
Invasive grasses of North America
References
Fribourg, H. A., D. B. Hannaway, and C. P. West (ed.). Tall Fescue for the Twenty-first Century. 539 pp. Agron. Monog. 53. ASA, CSSA, SSSA. Madison, WI.
arundinacea
Bunchgrasses of Europe
Forages
Garden plants of Europe
Lawn grasses
Phytoremediation plants
Ornamental grass | Lolium arundinaceum | [
"Biology"
] | 2,802 | [
"Phytoremediation plants",
"Bioremediation"
] |
14,597,235 | https://en.wikipedia.org/wiki/Ordered%20geometry | Ordered geometry is a form of geometry featuring the concept of intermediacy (or "betweenness") but, like projective geometry, omitting the basic notion of measurement. Ordered geometry is a fundamental geometry forming a common framework for affine, Euclidean, absolute, and hyperbolic geometry (but not for projective geometry).
History
Moritz Pasch first defined a geometry without reference to measurement in 1882. His axioms were improved upon by Peano (1889), Hilbert (1899), and Veblen (1904). Euclid anticipated Pasch's approach in definition 4 of The Elements: "a straight line is a line which lies evenly with the points on itself".
Primitive concepts
The only primitive notions in ordered geometry are points A, B, C, ... and the ternary relation of intermediacy [ABC] which can be read as "B is between A and C".
Definitions
The segment AB is the set of points P such that [APB].
The interval AB is the segment AB and its end points A and B.
The ray A/B (read as "the ray from A away from B") is the set of points P such that [PAB].
The line AB is the interval AB and the two rays A/B and B/A. Points on the line AB are said to be collinear.
An angle consists of a point O (the vertex) and two non-collinear rays out from O (the sides).
A triangle is given by three non-collinear points (called vertices) and their three segments AB, BC, and CA.
If three points A, B, and C are non-collinear, then a plane ABC is the set of all points collinear with pairs of points on one or two of the sides of triangle ABC.
If four points A, B, C, and D are non-coplanar, then a space (3-space) ABCD is the set of all points collinear with pairs of points selected from any of the four faces (planar regions) of the tetrahedron ABCD.
Axioms of ordered geometry
There exist at least two points.
If A and B are distinct points, there exists a C such that [ABC].
If [ABC], then A and C are distinct (A ≠ C).
If [ABC], then [CBA] but not [CAB].
If C and D are distinct points on the line AB, then A is on the line CD.
If AB is a line, there is a point C not on the line AB.
(Axiom of Pasch) If ABC is a triangle and [BCD] and [CEA], then there exists a point F on the line DE for which [AFB].
Axiom of dimensionality:
For planar ordered geometry, all points are in one plane. Or
If ABC is a plane, then there exists a point D not in the plane ABC.
All points are in the same plane, space, etc. (depending on the dimension one chooses to work within).
(Dedekind's Axiom) For every partition of all the points on a line into two nonempty sets such that no point of either lies between two points of the other, there is a point of one set which lies between every other point of that set and every point of the other set.
These axioms are closely related to Hilbert's axioms of order. For a comprehensive survey of axiomatizations of ordered geometry see Pambuccian (2011).
Results
Sylvester's problem of collinear points
The Sylvester–Gallai theorem can be proven within ordered geometry.
Parallelism
Gauss, Bolyai, and Lobachevsky developed a notion of parallelism which can be expressed in ordered geometry.
Theorem (existence of parallelism): Given a point A and a line r, not through A, there exist exactly two limiting rays from A in the plane Ar which do not meet r. So there is a parallel line through A which does not meet r.
Theorem (transmissibility of parallelism): The parallelism of a ray and a line is preserved by adding or subtracting a segment from the beginning of a ray.
The transitivity of parallelism cannot be proven in ordered geometry. Therefore, the "ordered" concept of parallelism does not form an equivalence relation on lines.
See also
Incidence geometry
Euclidean geometry
Hilbert's axioms
Tarski's axioms
Affine geometry
Absolute geometry
Non-Euclidean geometry
Erlangen program
Cyclic order
Separation relation
References
Fields of geometry
Order theory | Ordered geometry | [
"Mathematics"
] | 947 | [
"Fields of geometry",
"Order theory",
"Geometry"
] |
14,597,289 | https://en.wikipedia.org/wiki/Pythagorean%20addition | In mathematics, Pythagorean addition is a binary operation on the real numbers that computes the length of the hypotenuse of a right triangle, given its two sides. According to the Pythagorean theorem, for a triangle with sides and , this length can be calculated as
where denotes the Pythagorean addition operation.
This operation can be used in the conversion of Cartesian coordinates to polar coordinates. It also provides a simple notation and terminology for some formulas when its summands are complicated; for example, the energy-momentum relation in physics becomes
It is implemented in many programming libraries as the hypot function, in a way designed to avoid errors arising due to limited-precision calculations performed on computers. In its applications to signal processing and propagation of measurement uncertainty, the same operation is also called addition in quadrature; it is related to the quadratic mean or "root mean square".
Applications
Pythagorean addition (and its implementation as the hypot function) is often used together with the atan2 function to convert from Cartesian coordinates to polar coordinates :
Repeated Pythagorean addition can find the diameter of a rectangular cuboid. This is the longest distance between two points, the length of the body diagonal of the cuboid. For a cuboid with side lengths , , and , this length is .
If measurements have independent errors respectively, the quadrature method gives the overall error,
whereas the upper limit of the overall error is
if the errors were not independent.
In signal processing, addition in quadrature is used to find the overall noise from independent sources of noise. For example, if an image sensor gives six digital numbers of shot noise, three of dark current noise and two of Johnson–Nyquist noise under a specific condition, the overall noise is
digital numbers, showing the dominance of larger sources of noise.
The root mean square of a finite set of numbers is times their Pythagorean sum. This is a generalized mean of the numbers.
Properties
The operation is associative and commutative, and
This means that the real numbers under form a commutative semigroup.
The real numbers under are not a group, because can never produce a negative number as its result, whereas each element of a group must be the result of applying the group operation to itself and the identity element. On the non-negative numbers, it is still not a group, because Pythagorean addition of one number by a second positive number can only increase the first number, so no positive number can have an inverse element. Instead, it forms a commutative monoid on the non-negative numbers, with zero as its identity.
Implementation
Hypot is a mathematical function defined to calculate the length of the hypotenuse of a right-angle triangle. It was designed to avoid errors arising due to limited-precision calculations performed on computers. Calculating the length of the hypotenuse of a triangle is possible using the square root function on the sum of two squares, but hypot avoids problems that occur when squaring very large or very small numbers. If calculated using the natural formula,
the squares of very large or small values of and may exceed the range of machine precision when calculated on a computer, leading to an inaccurate result caused by arithmetic underflow and overflow. The hypot function was designed to calculate the result without causing this problem.
If either input to hypot is infinite, the result is infinite. Because this is true for all possible values of the other input, the IEEE 754 floating-point standard requires that this remains true even when the other input is not a number (NaN).
Since C++17, there has been an additional hypot function for 3D calculations:
Calculation order
The difficulty with the naive implementation is that may overflow or underflow, unless the intermediate result is computed with extended precision. A common implementation technique is to exchange the values, if necessary, so that , and then to use the equivalent form
The computation of cannot overflow unless both and are zero. If underflows, the final result is equal to , which is correct within the precision of the calculation. The square root is computed of a value between 1 and 2. Finally, the multiplication by cannot underflow, and overflows only when the result is too large to represent. This implementation has the downside that it requires an additional floating-point division, which can double the cost of the naive implementation, as multiplication and addition are typically far faster than division and square root. Typically, the implementation is slower by a factor of 2.5 to 3.
More complex implementations avoid this by dividing the inputs into more cases:
When is much larger than , , to within machine precision.
When overflows, multiply both and by a small scaling factor (e.g. 2−64 for IEEE single precision), use the naive algorithm which will now not overflow, and multiply the result by the (large) inverse (e.g. 264).
When underflows, scale as above but reverse the scaling factors to scale up the intermediate values.
Otherwise, the naive algorithm is safe to use.
However, this implementation is extremely slow when it causes incorrect jump predictions due to different cases. Additional techniques allow the result to be computed more accurately, e.g. to less than one ulp.
Programming language support
Pythagorean addition function is present as the hypot function in many programming languages and libraries, including
CSS,
C++11,
D,
Fortran (since Fortran 2008),
Go,
JavaScript (since ES2015),
Julia,
Java (since version 1.5),
Kotlin,
MATLAB,
PHP,
Python,
Ruby,
Rust,
and Scala.
Metafont has Pythagorean addition and subtraction as built-in operations, under the names ++ and +-+ respectively.
See also
Euclidean distance
Alpha max plus beta min algorithm
References
Further reading
.
Operations on numbers
Addition
Numerical analysis | Pythagorean addition | [
"Mathematics"
] | 1,243 | [
"Euclidean plane geometry",
"Mathematical objects",
"Computational mathematics",
"Equations",
"Arithmetic",
"Mathematical relations",
"Pythagorean theorem",
"Numerical analysis",
"Planes (geometry)",
"Operations on numbers",
"Approximations"
] |
14,597,919 | https://en.wikipedia.org/wiki/High%20Energy%20Astronomy%20Observatory%203 | The last of NASA's three High Energy Astronomy Observatories, HEAO 3 was launched 20 September 1979 on an Atlas-Centaur launch vehicle, into a nearly circular, 43.6 degree inclination low Earth orbit with an initial perigeum of 486.4 km.
The normal operating mode was a continuous celestial scan, spinning approximately once every 20 min about the spacecraft z-axis, which was nominally pointed at the Sun.
Total mass of the observatory at launch was .
HEAO 3 included three scientific instruments: the first a cryogenic high-resolution germanium gamma-ray spectrometer, and two devoted to cosmic-ray observations.
The scientific objectives of the mission's three experiments were:
(1) to study intensity, spectrum, and time behavior of X-ray and gamma-ray sources between 0.06 and 10 MeV; measure isotropy of the diffuse X-ray and gamma-ray background; and perform an exploratory search for X-and gamma-ray line emissions;
(2) to determine the isotopic composition of the most abundant components of the cosmic-ray flux with atomic mass between 7 and 56, and the flux of each element with atomic number (Z) between Z = 4 and Z = 50;
(3) to search for super-heavy nuclei up to Z = 120 and measure the composition of the nuclei with Z >20.
The Gamma-ray Line Spectrometer Experiment
The HEAO "C-1" instrument (as it was known before launch) was a sky-survey experiment, operating in the hard X-ray and low-energy gamma-ray bands.
The gamma-ray spectrometer was especially designed to search for the 511 keV gamma-ray line produced by the annihilation of positrons in stars, galaxies, and the interstellar medium (ISM), nuclear gamma-ray line emission expected from the interactions of cosmic rays in the ISM, the radioactive products of cosmic nucleosynthesis, and nuclear reactions due to low-energy cosmic rays.
In addition, careful study was made of the spectral and time variations of known hard X-ray sources.
The experimental package contained four cooled, p-type high-purity Ge gamma-ray detectors with a total volume of about 100 cm, enclosed in a thick (6.6 cm average) caesium iodide (CsI) scintillation shield in active anti-coincidence to suppress extraneous background.
The experiment was capable of measuring gamma-ray energies falling within the energy interval from 0.045 to 10 MeV. The Ge detector system had an initial energy resolution better than 2.5 keV at 1.33 MeV and a line sensitivity from 1.E-4 to 1.E-5 photons/cm2-s, depending on the energy. Key experimental parameters were (1) a geometry factor of 11.1 cm2-sr, (2) effective area ~75 cm at 100 keV, (3) a field of view of ~30 deg FWHM at 45 keV, and (4) a time resolution of less than 0.1 ms for the germanium detectors and 10 s for the CsI detectors. The gamma-ray spectrometer operated until 1 June 1980, when its cryogen was exhausted. The energy resolution of the Ge detectors was subject to degradation (roughly proportional to energy and time) due to radiation damage. The primary data are available at from the NASA HESARC and at JPL. They include instrument, orbit, and aspect data plus some spacecraft housekeeping information on 1600-bpi binary tapes. Some of this material has subsequently been archived on more modern media. The experiment was proposed, developed, and managed by the Jet Propulsion Laboratory of the California Institute of Technology, under the direction of Dr. Allan S. Jacobson.
The Isotopic Composition of Primary Cosmic Rays Experiment
The HEAO C-2 experiment measured the relative composition of the isotopes of the primary cosmic rays between beryllium and iron (Z from 4 to 26) and the elemental abundances up to tin (Z=50). Cerenkov counters and hodoscopes, together with the Earth's magnetic field, formed a spectrometer. They determined charge and mass of cosmic rays to a precision of 10% for the most abundant elements over the momentum range from 2 to 25 GeV/c (c=speed of light). Scientific direction was by Principal Investigators Prof. Bernard Peters and Dr. Lyoie Koch-Miramond. The primary data base has been archived at the Centre Etudes Nuclearires de Saclay and the Danish Space Research Institute. Information on the data products is given by Engelman et al. 1985.
The Heavy Nuclei Experiment
The purpose of the HEAO C-3 experiment was to measure the charge spectrum of cosmic-ray nuclei over the nuclear charge (Z) range from 17 to 120, in the energy interval 0.3 to 10 GeV/nucleon; to characterize cosmic ray sources; processes of nucleosynthesis, and propagation modes. The detector consisted of a double-ended instrument of upper and lower hodoscopes and three dual-gap ion chambers. The two ends were separated by a Cerenkov radiator. The geometrical factor was 4 cm2-sr. The ion chambers could resolve charge to 0.24 charge units at low energy and 0.39 charge units at high energy and high Z. The Cerenkov counter could resolve 0.3 to 0.4 charge units. Binns et al. give more details.
The experiment was proposed and managed by the Space Radiation Laboratory of the California Institute of Technology (Caltech), under the direction of Principal Investigator Prof. Edward C. Stone, Jr. of Caltech, and Dr. Martin H. Israel, and Dr. Cecil J. Waddington.
Project
The HEAO 3 Project was the final mission in the High Energy Astronomy Observatory series, which was managed by the NASA Marshall Space Flight Center (MSFC), where the project scientist was Dr. Thomas A. Parnell, and the project manager was Dr. John F. Stone. The prime contractor was TRW.
See also
HEAO Program
High Energy Astronomy Observatory 1
Einstein Observatory (HEAO 2)
References
1979 in spaceflight
Gamma-ray telescopes
Space telescopes
X-ray telescopes
Spacecraft launched in 1979 | High Energy Astronomy Observatory 3 | [
"Astronomy"
] | 1,310 | [
"Space telescopes"
] |
14,597,968 | https://en.wikipedia.org/wiki/Sound%20on%20tape | SOT is an acronym for the phrase sound on tape. It refers to any audio recorded on analog or digital video formats. It is used in scriptwriting for television productions and filmmaking to indicate the portions of the production that will use room tone or other audio from the time of recording, as opposed to audio recorded later (studio voice-over, Foley, etc.).
In broadcast journalism, SOT is generally considered to be audio captured from an individual who is on camera, like an interviewee and may also be referred to as a soundbite.
See also
Filmmaking
MOS (filmmaking)
Sound-on-film
External links
United States Department of State
http://www.nvm.org.au/General%20Articles.htm#WHAT_IS_NAT_SOT
References
Audio engineering
Television terminology | Sound on tape | [
"Engineering"
] | 170 | [
"Electrical engineering",
"Audio engineering"
] |
14,598,117 | https://en.wikipedia.org/wiki/Northern%20riverine%20forest | The northern riverine forest is a type of forest ecology most dominant along waterways in the northeastern and north-central United States and bordering areas of Canada. Key species include willow, elm, American sycamore, painted trillium, goldthread, common wood-sorrel, pink lady's-slipper, wild sarsaparilla, and cottonwood.
One of the distinct ecosystems is the Riverine Forest. These are found on the lower flood plains along the rivers edge. The main species found here is one of the deciduous species; the Balsam Poplar. These trees like a high volume of moisture and are able to tolerate flooding. They are distinguishable by their thick, gnarly bark and their larger, pointed leaves. These leaves have a distinct drip tip. The trees supply homes for the many native species of fauna.
Other Key trees include yellow birch, white birch, sugar maple, American beech, eastern hemlock, white pine, red pine, northern red oak, pin cherry, and red spruce.
Key shrubs include striped maple and hobblebush.
References
Kricher, John. A Field Guide to Eastern Forests. Houghton-Mifflin, Boston, 1998.
Sherwin, Brooke. Wealselhead Society Calgary, Alberta. 2010
Ecology | Northern riverine forest | [
"Biology"
] | 262 | [
"Ecology"
] |
14,598,196 | https://en.wikipedia.org/wiki/Mathematics%20of%20Operations%20Research | Mathematics of Operations Research is a quarterly peer-reviewed scientific journal established in February 1976. It focuses on areas of mathematics relevant to the field of operations research such as continuous optimization, discrete optimization, game theory, machine learning, simulation methodology, and stochastic models. The journal is published by INFORMS (Institute for Operations Research and the Management Sciences). the journal has a 2017 impact factor of 1.078.
History
The journal was established in 1976. The founding editor-in-chief was Arthur F. Veinott Jr. (Stanford University). He served until 1980, when the position was taken over by Stephen M. Robinson, who held the position until 1986. Erhan Cinlar served from 1987 to 1992, and was followed by Jan Karel Lenstra (1993-1998). Next was Gérard Cornuéjols (1999-2003) and Nimrod Megiddo (2004-2009). Finally came Uri Rothblum (2009-2012), Jim Dai (2012-2018), and the current editor-in-chief Katya Scheinberg (2019–present).
The journal's three initial sections were game theory, stochastic systems, and mathematical programming. Currently, the journal has four sections: continuous optimization, discrete optimization, stochastic models, and game theory.
Notable papers
The following papers have been cited most frequently:
Roger B. Myerson, "Optimal Auction Design", vol 6:1, 58-73
A. Ben-Tal and Arkadi Nemirovski, "Robust Convex Optimization", vol 23:4, 769-805
M. R. Garey, D. S. Johnson, and Ravi Sethi, "The Complexity of Flowshop and Jobshop Scheduling", vol 1:2, 117-129
References
External links
Academic journals established in 1976
Media related to game theory
Operations research
Systems journals
Mathematics journals
INFORMS academic journals
Operations research journals | Mathematics of Operations Research | [
"Mathematics"
] | 393 | [
"Applied mathematics",
"Game theory",
"Operations research",
"Media related to game theory"
] |
14,598,720 | https://en.wikipedia.org/wiki/Nutating%20disc%20engine | A nutating disc engine (also sometimes called a disc engine) is an internal combustion engine comprising fundamentally of one moving part and a direct drive onto the crankshaft. Initially patented in 1993, it differs from earlier internal combustion engines in a number of ways and uses a circular rocking or wobbling nutating motion, drawing heavily from similar steam-powered engines developed in the 19th century, and similar to the motion of the non-rotating portion of a swash plate on a swash plate engine.
Operation
In its basic configuration the core of the engine is a nutating non-rotating disc, with the center of its hub mounted in the middle of a Z-shaped shaft. The two ends of the shaft rotate, while the disc "nutates" (performs a wobbling motion without rotating around its axis). The motion of the disc circumference describes a portion of a sphere. A portion of the area of the disc is used for intake and compression, a portion is used to seal against a center casing, and the remaining portion is used for expansion and exhaust. The compressed air is admitted to an external accumulator, and then into an external combustion chamber before it is admitted to the power side of the disc. The external combustion chamber enables the engine to use diesel fuel in small engine sizes, giving it unique capabilities for unmanned aerial vehicle propulsion and other applications. One significant benefit of the nutating engine is the overlap of the power strokes.
Power is transmitted directly to the output shaft (the crankshaft), completely eliminating the need for complicated linkages essential in a conventional piston engine (to convert the piston's linear motion to rotating output motion). Since the disc does not rotate, the seal velocities are lower than in an equivalent IC piston engine. The total seal length is rather long, however, which may negate this advantage.
The disc wobbles inside a housing and, in its simplest version, half of the single disc (one lobe) performs the intake/compression function while the other lobe performs the power/exhaust function. The disc lobes can be configured to have equal compression and expansion volumes, or to have the compression volume greater than or less than the expansion volume. This means that the engine can be self supercharged (see supercharger), or operate as a Miller cycle / Atkinson cycle.
Patents and production history
U.S. patent number 5,251,594 was granted to Leonard Meyer of Illinois in 1993 for a "nutating internal combustion disc engine". The Meyer Nutating Engine is a new type of internal combustion engine with higher power density than conventional reciprocating piston engines and which can operate on a variety of fuels, including gasoline, heavy fuels and hydrogen. The patent made reference to various 20th-century nutating engines in the United States, but no reference at all to the original Dakeyne engine, described below, in its prior art. The similarity to its 166-year-old hydraulic predecessor is strikingly evident, the main change being that the disc is not entirely flat but slightly convex.
The details of operation and potential of the Meyer nutating disk engine have been described by Professor T. Alexander (publishes as T. Korakianitis) and co-workers.
A single prototype has been run briefly under its own power, with a power- to-weight ratio equal to those of typical current four-stroke engines. It is claimed by the authors of the developer/US Army Research Laboratory/NASA technical evaluation report that a production version of the new engine (for UAV applications) might provide a power-to-weight ratio of 1.6 hp/lb or 2.7 kW/kg. This is slightly better than current automotive production engines but nowhere near the Graupner G58 or the Desert Air DA 150.
A company called McMasters, previously headed by successful American entrepreneur Harold McMaster, is also developing a nutating motor burning a mixture of pure hydrogen and pure oxygen that, it claims, will give 200 hp but weigh only one-tenth that of gasoline/air production automotive engines with the same output. So far the McMasters company claims to have spent $10 million on its development. Plans are also being made to develop a version "the size of a coffee can" that can be built directly into wheel hubs, eliminating the traditional drive train entirely. This concept was first attempted in the British Leyland Mini Moke but was, at that time, severely hampered by lack of reliable synchronization – which is now more commonplace because of ubiquitous miniaturized embedded modern-day computer chips. A gasoline-powered version is also planned by McMasters, which is claimed to give substantially cleaner operation than traditional engines.
History
Dakeyne hydraulic disc engine
In the 1820s the mill owners Edward and James Dakeyne of Darley Dale, Derbyshire, designed and had constructed a hydraulic engine (a water engine) known as "The Romping Lion", based on the same principles, to make use of the high-pressure water available near their mill. Little is known of their engine other than from the somewhat unclear description accompanying the patent, which was granted in 1830. Its main castings were made at the Morley Park foundry near Heage, and it weighed 7 tons and generated 35 horsepower at a head of 96 feet of water. Frank Nixon in his book "The Industrial Archaeology of Derbyshire" (1969) commented that "The most striking characteristic of this ingenious machine is perhaps the difficulty experienced by those trying to describe it; the patentees & Stephen Glover only succeeded in producing descriptions of monumental incomprehensibility".
A larger model was constructed to drain lead mines at Alport near Youlgreave and many steam versions were subsequently built by other people.
Davies and Taylor
The first people to develop steam-powered disc engines based on the Dakeynes' design were George Davies and Henry Taylor who patented their engine in 1836. It was fitted with valves to control the admission of steam and also differed from the Dakeynes' version in that the axis of the engine was horizontal and the casing of the engine rotated around the disc, the opposite of the original. More patents followed over the next eight years, mainly introducing expansive working and improving the engine's sealing.
In 1836 Davies and Taylor granted manufacturing rights for the engine to Fardon and Gossage, owners of a salt works. At the same time Davies was working on a canal tug with a disc engine driving a paddle wheel at the stern. By 1838 a 5 hp engine was in use at the salt works pumping brine.
In 1839 Davies, Taylor, Fardon and Gossage conveyed manufacturing rights to the engine to the Birmingham Patent Disc Engine company. As Superintendent of the Company, Henry Davies was responsible for all design and manufacture, while Gossage was a director. In February 1841 the Board reported that 26 engines had been completed, further engines totalling 260 horsepower were in progress, and a total of 500 horsepower were on order. They could make engines ranging from 5 to 30 horsepower and were currently making engines for a railway carriage. An article in a French journal of 1841 reported that a 12 hp engine had been in use for six months as a winding engine at Corbyn's Hall Mine, Dudley, which could lift a load of 1 ton 180 ft in 1 minute. The disc engines cost from £96 for an 8 hp machine to £300 for a 30 hp model.
Ransomes of Ipswich (who were later to become the well-known agricultural engineers Ransomes and Sims) exhibited a portable steam engine at the Royal Liverpool Show in 1841, powered by a 5 hp BPDE disc engine.
By 1840 a canal boat, The Experiment, powered by a Davies engine, was being used for propeller testing, and in 1842 Davies installed a disc engine and disc pump in a canal barge which he demonstrated by draining half a mile of the Stourbridge canal. The same year, a 5 hp engine was fitted in one of HMS Geyser's pinnaces. However, trials on the Thames and for the Directors of the Grand Junction Canal failed to convince either the Admiralty or the canal owners.
Nevertheless, there was a growing interest in using steam power on the canals, and the small beam of canal boats very much favoured disc engines. Davies saw his opportunity and built an iron-hulled canal tug with a 16 hp BPDE engine in 1843. To minimise wash he fitted four propellers spaced along a shaft the length of the boat and enclosed in a tube below the waterline. There were two of these propulsion units side by side for a total of 8 propellers. It worked well enough to convince the Directors of the Birmingham and Liverpool Junction Canal to order six tugs which could tow as many as sixteen barges a day at a reasonable speed. In use, a train of six to eight barges left Ellesmere Port and Wolverhampton each day, carrying an average of 100 tons. Unfortunately nobody had considered how the barge train was to transit through the canal locks and shallows. Each such obstruction meant that the train had to be uncoupled and the barges individually manhandled or towed by horse through the obstruction before the train was reassembled on the other side. This negated the benefits of the tug and train and in 1845 the canal's Directors removed the tugs from service.
In 1844 the BPDE collapsed. The workshop equipment, various completed engines and quantities of work in progress were offered for sale. During legal proceedings in 1851 following the bankruptcy of two of the BPDE's principal investors, it was said that the disc engine had not made a profit and that to have relied on it as a realisable asset "was absurd".
Bishopp
A competitor to Davies and Taylor was former locomotive engineer George Daniell Bishopp, who had Donkin & Co build his first engine in 1840, and a patent was granted in 1845. The partners Barnard William Farey and Bryan Donkin Jr. patented improvements to the basic design; Donkin had worked with Bishopp on his original engine, while Farey was an employee of Donkins.
Bishopp's engine met with some scepticism from the trade press when it was launched on the market. But Bishopp had opted to revert to the Dakeynes' original design which had a yoke which took most of the dynamic forces and greatly reduced the load on the bearings and seals. In the event that there was any leakage, the seals were adjustable. In addition, Bishopp had his engines produced by companies with recognised engineering capabilities rather than carrying out his own manufacturing; as well as Donkin's, some of his first engines were built by Joseph Whitworth & Co of Manchester. Another engineering company with a very good reputation was G. Rennie and Son of London who were so convinced of the engine's potential that in 1849 they employed Bishopp as their foreman of works with specific responsibility for the disc engine.
By 1849 a number of Bishopp engines had been sold, and one was used with great success to run the printing presses of the Times newspaper, while another produced by G. Rennie and Son was used to power the iron gunboat HMS Minx. The Times engine had been built by Whitworth and had been shown at the Great Exhibition of 1851 where it ran smoothly and quietly and impressed all who saw it.
In 1853 a disc engine 13 inches in diameter was purchased from Rennie to propel a 55 foot Russian gunboat, which it did at a speed of .
At the time the advantages of the disc engine were listed in 1855 by The Mechanics' Magazine as:
It was as much as half the weight of a conventional steam engine of equivalent power
It had the advantages of rotary steam engines without their inconvenience
It was more economical in terms of fuel: as much as 18%
It was capable of higher RPM without needing gearing
It was suited to high-pressure use
Disc engines ultimately fell into disuse because of competition from modern high-speed steam engines, which were small and light and could offer features such as compounding. Additionally, conventional engines did not require the same precision manufacture as disc engines and steam leakage was not a problem.
Water meters
The nutating disc meter, which uses the same geometry and concept as the Dakeynes' original engine, is probably the most widely used flowmeter in the world, and it is claimed that more than half the water meters installed in domestic premises in the US and Europe are of this type. Used for 150 years, it is essentially a Dakeyne Disc Engine and was most probably developed by Farey and Donkin who mentioned a "fluid measurement meter" in their 1850 disc engine patent granted in 1850. By 1859 they were being manufactured by the Buffalo Meter Company of Buffalo, New York.
See also
Dakeyne hydraulic disc engine
References
External links
Patent document from USPTO
The Romping Lion - the story of the Dakeyne Disc Engine
Description of engine - Cornell university and links to three illustrations, one from The Mechanics Magazine, 1833.
article re: Len Meyer/ Baker engineering Inc. contract to develop engine
Engineering TV article
Animation 1 of McMaster Engine
Animation 2. of McMaster Engine
History
Inventors - The Romping Lion, Peakland Heritage site
The Dakeyne brothers. Thurston, "History of the Growth of the Steam Engine"
Technical reports
Full US Army test report and test results (PDF)
Nasa Technical report ID:20060056193
Kinetic-BEI innovation award 2008 (PDF)
Proposed engines
Engines | Nutating disc engine | [
"Physics",
"Technology"
] | 2,746 | [
"Physical systems",
"Proposed engines",
"Machines",
"Engines"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.