id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
7,148,738 | https://en.wikipedia.org/wiki/Molecular%20Hamiltonian | In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. This operator and the associated Schrödinger equation play a central role in computational chemistry and physics for computing properties of molecules and aggregates of molecules, such as thermal conductivity, specific heat, electrical conductivity, optical, and magnetic properties, and reactivity.
The elementary parts of a molecule are the nuclei, characterized by their atomic numbers, Z, and the electrons, which have negative elementary charge, −e. Their interaction gives a nuclear charge of Z + q, where q = −eN, with N equal to the number of electrons. Electrons and nuclei are, to a very good approximation, point charges and point masses. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. The Hamiltonian that contains only the kinetic energies of electrons and nuclei, and the Coulomb interactions between them, is known as the Coulomb Hamiltonian. From it are missing a number of small terms, most of which are due to electronic and nuclear spin.
Although it is generally assumed that the solution of the time-independent Schrödinger equation associated with the Coulomb Hamiltonian will predict most properties of the molecule, including its shape (three-dimensional structure), calculations based on the full Coulomb Hamiltonian are very rare. The main reason is that its Schrödinger equation is very difficult to solve. Applications are restricted to small systems like the hydrogen molecule.
Almost all calculations of molecular wavefunctions are based on the separation of the Coulomb Hamiltonian first devised by Born and Oppenheimer. The nuclear kinetic energy terms are omitted from the Coulomb Hamiltonian and one considers the remaining Hamiltonian as a Hamiltonian of electrons only. The stationary nuclei enter the problem only as generators of an electric potential in which the electrons move in a quantum mechanical way. Within this framework the molecular Hamiltonian has been simplified to the so-called clamped nucleus Hamiltonian, also called electronic Hamiltonian, that acts only on functions of the electronic coordinates.
Once the Schrödinger equation of the clamped nucleus Hamiltonian has been solved for a sufficient number of constellations of the nuclei, an appropriate eigenvalue (usually the lowest) can be seen as a function of the nuclear coordinates, which leads to a potential energy surface. In practical calculations the surface is usually fitted in terms of some analytic functions. In the second step of the Born–Oppenheimer approximation the part of the full Coulomb Hamiltonian that depends on the electrons is replaced by the potential energy surface. This converts the total molecular Hamiltonian into another Hamiltonian that acts only on the nuclear coordinates. In the case of a breakdown of the Born–Oppenheimer approximation—which occurs when energies of different electronic states are close—the neighboring potential energy surfaces are needed; see the article on the Born–Oppenheimer approximation for more details.
The nuclear motion Schrödinger equation can be solved in a space-fixed (laboratory) frame, but then the translational and rotational (external) energies are not accounted for. Only the (internal) atomic vibrations enter the problem. Further, for molecules larger than triatomic ones, it is quite common to introduce the harmonic approximation, which approximates the potential energy surface as a quadratic function of the atomic displacements. This gives the harmonic nuclear motion Hamiltonian. Making the harmonic approximation, we can convert the Hamiltonian into a sum of uncoupled one-dimensional harmonic oscillator Hamiltonians. The one-dimensional harmonic oscillator is one of the few systems that allows an exact solution of the Schrödinger equation.
Alternatively, the nuclear motion (rovibrational) Schrödinger equation can be solved in a special frame (an Eckart frame) that rotates and translates with the molecule. Formulated with respect to this body-fixed frame the Hamiltonian accounts for rotation, translation and vibration of the nuclei. Since Watson introduced in 1968 an important simplification to this Hamiltonian, it is often referred to as Watson's nuclear motion Hamiltonian, but it is also known as the Eckart Hamiltonian.
Coulomb Hamiltonian
The algebraic form of many observables—i.e., Hermitian operators representing observable quantities—is obtained by the following quantization rules:
Write the classical form of the observable in Hamilton form (as a function of momenta p and positions q). Both vectors are expressed with respect to an arbitrary inertial frame, usually referred to as laboratory-frame or space-fixed frame.
Replace p by −iħ∇ and interpret q as a multiplicative operator. Here ∇ is the nabla operator, a vector operator consisting of first derivatives. The well-known commutation relations for the p and q operators follow directly from the differentiation rules.
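As a small illustration of these rules, the sketch below uses the sympy library to check that the substitution p → −iħ d/dx reproduces the canonical commutation relation [q, p] = iħ in one dimension. The wavefunction name psi and the use of Python/sympy are illustrative choices, not part of the article.

```python
# Hedged sketch: verify [q, p] = i*hbar on an arbitrary wavefunction psi(x).
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
psi = sp.Function('psi')(x)

def p(f):
    # momentum operator: p -> -i*hbar * d/dx
    return -sp.I * hbar * sp.diff(f, x)

def q(f):
    # position operator acts multiplicatively
    return x * f

commutator = sp.simplify(q(p(psi)) - p(q(psi)))
print(commutator)  # expected output: I*hbar*psi(x)
```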
Classically the electrons and nuclei in a molecule have kinetic energy of the form p²/(2m) and interact via Coulomb interactions, which are inversely proportional to the distance rij between particles i and j.
In this expression ri stands for the coordinate vector of any particle (electron or nucleus), but from here on we will reserve capital R to represent the nuclear coordinate, and lower case r for the electrons of the system. The coordinates can be taken to be expressed with respect to any Cartesian frame centered anywhere in space, because distance, being an inner product, is invariant under rotation of the frame and, being the norm of a difference vector, distance is invariant under translation of the frame as well.
By quantizing the classical energy in Hamilton form one obtains a molecular Hamilton operator that is often referred to as the Coulomb Hamiltonian. This Hamiltonian is a sum of five terms. They are
The kinetic energy operators for each nucleus in the system;
The kinetic energy operators for each electron in the system;
The potential energy between the electrons and nuclei – the total electron-nucleus Coulombic attraction in the system;
The potential energy arising from Coulombic electron-electron repulsions
The potential energy arising from Coulombic nuclei-nuclei repulsions – also known as the nuclear repulsion energy. See electric potential for more details.
Here Mi is the mass of nucleus i, Zi is the atomic number of nucleus i, and me is the mass of the electron. The Laplace operator of particle i is ∇_i² = ∂²/∂x_i² + ∂²/∂y_i² + ∂²/∂z_i². Since the kinetic energy operator is an inner product, it is invariant under rotation of the Cartesian frame with respect to which xi, yi, and zi are expressed.
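As a worked illustration of the last of the five terms, the short script below evaluates the nuclear repulsion energy Σ_{i<j} Z_i Z_j / R_ij in atomic units for an assumed, roughly water-like geometry; the coordinates are illustrative values, not data from the article.

```python
# Hedged sketch: nuclear repulsion energy for an assumed O-H-H geometry (bohr).
import numpy as np

Z = np.array([8, 1, 1])                      # atomic numbers: O, H, H
R = np.array([[0.000,  0.000, 0.000],        # assumed nuclear positions in bohr
              [0.000,  1.431, 1.108],
              [0.000, -1.431, 1.108]])

V_nn = sum(Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
           for i in range(len(Z)) for j in range(i + 1, len(Z)))
print(f"nuclear repulsion energy ~ {V_nn:.4f} hartree")
```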
Small terms
In the 1920s much spectroscopic evidence made it clear that the Coulomb Hamiltonian is missing certain terms. Especially for molecules containing heavier atoms, these terms, although much smaller than kinetic and Coulomb energies, are nonnegligible. These spectroscopic observations led to the introduction of a new degree of freedom for electrons and nuclei, namely spin. This empirical concept was given a theoretical basis by Paul Dirac when he introduced a relativistically correct (Lorentz covariant) form of the one-particle Schrödinger equation. The Dirac equation predicts that spin and spatial motion of a particle interact via spin–orbit coupling. In analogy spin-other-orbit coupling was introduced. The fact that particle spin has some of the characteristics of a magnetic dipole led to spin–spin coupling. Further terms without a classical counterpart are the Fermi-contact term (interaction of electronic density on a finite size nucleus with the nucleus), and nuclear quadrupole coupling (interaction of a nuclear quadrupole with the gradient of an electric field due to the electrons). Finally a parity violating term predicted by the Standard Model must be mentioned. Although it is an extremely small interaction, it has attracted a fair amount of attention in the scientific literature because it gives different energies for the enantiomers in chiral molecules.
The remaining part of this article will ignore spin terms and consider the solution of the eigenvalue (time-independent Schrödinger) equation of the Coulomb Hamiltonian.
The Schrödinger equation of the Coulomb Hamiltonian
The Coulomb Hamiltonian has a continuous spectrum due to the center of mass (COM) motion of the molecule in homogeneous space. In classical mechanics it is easy to separate off the COM motion of a system of point masses. Classically the motion of the COM is uncoupled from the other motions. The COM moves uniformly (i.e., with constant velocity) through space as if it were a point particle with mass equal to the sum Mtot of the masses of all the particles.
In quantum mechanics a free particle has as state function a plane wave function, which is a non-square-integrable function of well-defined momentum. The kinetic energy of this particle can take any positive value. The position of the COM is uniformly probable everywhere, in agreement with the Heisenberg uncertainty principle.
By introducing the coordinate vector X of the center of mass as three of the degrees of freedom of the system and eliminating the coordinate vector of one (arbitrary) particle, so that the number of degrees of freedom stays the same, one obtains by a linear transformation a new set of coordinates ti. These coordinates are linear combinations of the old coordinates of all particles (nuclei and electrons). By applying the chain rule one can show that
The first term of the transformed Hamiltonian is the kinetic energy of the COM motion, which can be treated separately since the remaining part does not depend on X. As just stated, its eigenstates are plane waves. The potential V(t) consists of the Coulomb terms expressed in the new coordinates. The first term of the translationally invariant Hamiltonian has the usual appearance of a kinetic energy operator. The second term is known as the mass polarization term. The translationally invariant Hamiltonian can be shown to be self-adjoint and to be bounded from below. That is, its lowest eigenvalue is real and finite. Although the translationally invariant Hamiltonian is necessarily invariant under permutations of identical particles (since the full Hamiltonian and the COM kinetic energy are invariant), its invariance is not manifest.
Not many actual molecular applications of the translationally invariant Hamiltonian exist; see, however, the seminal work on the hydrogen molecule for an early application. In the great majority of computations of molecular wavefunctions the electronic problem is solved with the clamped nucleus Hamiltonian arising in the first step of the Born–Oppenheimer approximation.
See Ref. for a thorough discussion of the mathematical properties of the Coulomb Hamiltonian. That paper also discusses whether one can arrive a priori at the concept of a molecule (as a stable system of electrons and nuclei with a well-defined geometry) from the properties of the Coulomb Hamiltonian alone.
Clamped nucleus Hamiltonian
The clamped nucleus Hamiltonian, which is also often called the electronic Hamiltonian, describes the energy of the electrons in the electrostatic field of the nuclei, where the nuclei are assumed to be stationary with respect to an inertial frame.
The form of the electronic Hamiltonian is

H_el = −(ħ²/2mₑ) Σ_i ∇_i² − Σ_{i,A} Z_A e²/(4πε₀ r_{iA}) + Σ_{i>j} e²/(4πε₀ r_{ij}),

where r_{iA} = |r_i − R_A| and r_{ij} = |r_i − r_j|, and the nuclear positions R_A enter only as fixed parameters.
The coordinates of electrons and nuclei are expressed with respect to a frame that moves with the nuclei, so that the nuclei are at rest with respect to this frame. The frame stays parallel to a space-fixed frame. It is an inertial frame because the nuclei are assumed not to be accelerated by external forces or torques. The origin of the frame is arbitrary; it is usually positioned on a central nucleus or in the nuclear center of mass. Sometimes it is stated that the nuclei are "at rest in a space-fixed frame". This statement implies that the nuclei are viewed as classical particles, because a quantum mechanical particle cannot be at rest. (It would mean that it had simultaneously zero momentum and well-defined position, which contradicts Heisenberg's uncertainty principle.)
Since the nuclear positions are constants, the electronic kinetic energy operator is invariant under translation over any nuclear vector. The Coulomb potential, depending on difference vectors, is invariant as well. In the description of atomic orbitals and the computation of integrals over atomic orbitals this invariance is used by equipping all atoms in the molecule with their own localized frames parallel to the space-fixed frame.
As explained in the article on the Born–Oppenheimer approximation, a sufficient number of solutions of the Schrödinger equation of the clamped nucleus Hamiltonian leads to a potential energy surface (PES) V(R_1, …, R_N). It is assumed that the functional dependence of V on its coordinates is such that

V(R_1, …, R_N) = V(R_1′, …, R_N′)

for R_i′ = R_i + t (uniform translation) and R_i′ = R_i + Δφ (s × R_i) (infinitesimal rotation),

where t and s are arbitrary vectors and Δφ is an infinitesimal angle, Δφ >> Δφ². This invariance condition on the PES is automatically fulfilled when the PES is expressed in terms of differences of, and angles between, the R_i, which is usually the case.
Harmonic nuclear motion Hamiltonian
In the remaining part of this article we assume that the molecule is semi-rigid. In the second step of the BO approximation the nuclear kinetic energy Tn is reintroduced and the Schrödinger equation with Hamiltonian
is considered. One would like to recognize in its solution: the motion of the nuclear center of mass (3 degrees of freedom), the overall rotation of the molecule (3 degrees of freedom), and the nuclear vibrations. In general, this is not possible with the given nuclear kinetic energy, because it does not separate explicitly the 6 external degrees of freedom (overall translation and rotation) from the 3N − 6 internal degrees of freedom. In fact, the kinetic energy operator here is defined with respect to a space-fixed (SF) frame. If we were to move the origin of the SF frame to the nuclear center of mass, then, by application of the chain rule, nuclear mass polarization terms would appear. It is customary to ignore these terms altogether and we will follow this custom.
In order to achieve a separation we must distinguish internal and external coordinates, to which end Eckart introduced conditions to be satisfied by the coordinates. We will show how these conditions arise in a natural way from a harmonic analysis in mass-weighted Cartesian coordinates.
In order to simplify the expression for the kinetic energy we introduce mass-weighted displacement coordinates ρ_iα ≡ √(M_i) (R_iα − R_iα⁰), where R_iα⁰ is the equilibrium value of the nuclear coordinate. Since ∂/∂R_iα = √(M_i) ∂/∂ρ_iα, the kinetic energy operator becomes T = −(ħ²/2) Σ_i Σ_α ∂²/∂ρ_iα².
If we make a Taylor expansion of V around the equilibrium geometry,

V = V_0 + Σ_iα (∂V/∂ρ_iα)₀ ρ_iα + ½ Σ_{iα,jβ} ρ_iα F_{iα,jβ} ρ_jβ + ⋯,

and truncate after three terms (the so-called harmonic approximation), we can describe V with only the third term. The term V0 can be absorbed in the energy (gives a new zero of energy). The second term is vanishing because of the equilibrium condition. The remaining term contains the Hessian matrix F of V, which is symmetric and may be diagonalized with an orthogonal 3N × 3N matrix Q with constant elements: Q F Qᵀ = Φ, with Φ = diag(f_1, …, f_{3N−6}, 0, …, 0).
It can be shown from the invariance of V under rotation and translation that six of the eigenvectors of F (the last six rows of Q) have eigenvalue zero (are zero-frequency modes). They span the external space. The first 3N − 6 rows of Q are—for molecules in their ground state—eigenvectors with non-zero eigenvalue; they are the internal coordinates and form an orthonormal basis for a (3N − 6)-dimensional subspace of the nuclear configuration space R^{3N}, the internal space. The zero-frequency eigenvectors are orthogonal to the eigenvectors of non-zero frequency. It can be shown that these orthogonalities are in fact the Eckart conditions. The kinetic energy expressed in the internal coordinates is the internal (vibrational) kinetic energy.
With the introduction of normal coordinates q_t ≡ Σ_iα Q_{t,iα} ρ_iα, the vibrational (internal) part of the Hamiltonian for the nuclear motion becomes in the harmonic approximation

H_vib ≈ −(ħ²/2) Σ_{t=1}^{3N−6} ∂²/∂q_t² + ½ Σ_{t=1}^{3N−6} f_t q_t².
The corresponding Schrödinger equation is easily solved; it factorizes into 3N − 6 equations for one-dimensional harmonic oscillators. The main effort in this approximate solution of the nuclear motion Schrödinger equation is the computation of the Hessian F of V and its diagonalization.
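A minimal sketch of this procedure is shown below for a fictitious diatomic molecule; the masses and the single bond-stretch force constant are arbitrary illustrative numbers, and only the mass-weighting and diagonalization steps described above are demonstrated.

```python
# Hedged sketch: harmonic frequencies from a mass-weighted Cartesian Hessian.
import numpy as np

m = np.array([1.0, 19.0])      # assumed nuclear masses (arbitrary units)
k = 4.0                        # assumed bond force constant (arbitrary units)

# 6x6 Cartesian Hessian for a stretch along x; all other blocks are zero.
F = np.zeros((6, 6))
F[0, 0] = F[3, 3] = k
F[0, 3] = F[3, 0] = -k

# Mass-weighting: F_mw[i, j] = F[i, j] / sqrt(m_i * m_j)
w = np.repeat(np.sqrt(m), 3)
F_mw = F / np.outer(w, w)

eigvals = np.linalg.eigvalsh(F_mw)
freqs = np.sqrt(eigvals[eigvals > 1e-10])   # non-zero modes -> harmonic frequencies
print(freqs)                                # matches sqrt(k * (1/m1 + 1/m2))
print(np.sqrt(k * (1 / m[0] + 1 / m[1])))
```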
This approximation to the nuclear motion problem, described in 3N mass-weighted Cartesian coordinates, became standard in quantum chemistry since the days (1980s–1990s) when algorithms for accurate computations of the Hessian F became available. Apart from the harmonic approximation, it has as a further deficiency that the external (rotational and translational) motions of the molecule are not accounted for. They are accounted for in a rovibrational Hamiltonian that sometimes is called Watson's Hamiltonian.
Watson's nuclear motion Hamiltonian
In order to obtain a Hamiltonian for external (translation and rotation) motions coupled to the internal (vibrational) motions, it is common to return at this point to classical mechanics and to formulate the classical kinetic energy corresponding to these motions of the nuclei. Classically it is easy to separate the translational—center of mass—motion from the other motions. However, the separation of the rotational from the vibrational motion is more difficult and is not completely possible. This ro-vibrational separation was first achieved by Eckart in 1935 by imposing what are now known as the Eckart conditions. Since the problem is described in a frame (an "Eckart" frame) that rotates with the molecule, and hence is a non-inertial frame, energies associated with the fictitious centrifugal and Coriolis forces appear in the kinetic energy.
In general, the classical kinetic energy T defines the metric tensor g = (g_ij) associated with the curvilinear coordinates s = (s_i) through 2T = Σ_ij g_ij (ds_i/dt)(ds_j/dt).
The quantization step is the transformation of this classical kinetic energy into a quantum mechanical operator. It is common to follow Podolsky by writing down the Laplace–Beltrami operator in the same (generalized, curvilinear) coordinates s as used for the classical form. The equation for this operator requires the inverse of the metric tensor g and its determinant. Multiplication of the Laplace–Beltrami operator by −ħ²/2 gives the required quantum mechanical kinetic energy operator. When we apply this recipe to Cartesian coordinates, which have unit metric, the same kinetic energy is obtained as by application of the quantization rules.
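The following sketch applies this recipe in a simple case—two-dimensional polar coordinates with unit mass—using the sympy library; building the Laplace–Beltrami operator from the metric recovers the familiar polar-coordinate Laplacian. The variable names are illustrative.

```python
# Hedged sketch: Laplace-Beltrami operator (1/sqrt(g)) d_i (sqrt(g) g^{ij} d_j f)
# for the metric coming from 2T = rdot**2 + r**2 * thetadot**2 (unit mass).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
f = sp.Function('f')(r, theta)

g = sp.Matrix([[1, 0], [0, r**2]])     # metric tensor g_ij
ginv, detg = g.inv(), g.det()
coords = (r, theta)

lb = sum(sp.diff(sp.sqrt(detg) * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
         for i in range(2) for j in range(2)) / sp.sqrt(detg)
print(sp.simplify(lb))   # d^2f/dr^2 + (1/r) df/dr + (1/r^2) d^2f/dtheta^2
```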
The nuclear motion Hamiltonian was obtained by Wilson and Howard in 1936, who followed this procedure, and further refined by Darling and Dennison in 1940. It remained the standard until 1968, when Watson was able to simplify it drastically by commuting the determinant of the metric tensor through the derivatives. We will give the ro-vibrational Hamiltonian obtained by Watson, which often is referred to as the Watson Hamiltonian. Before we do this we must mention that a derivation of this Hamiltonian is also possible by starting from the Laplace operator in Cartesian form, application of coordinate transformations, and use of the chain rule.
The Watson Hamiltonian, describing all motions of the N nuclei, is

H = −(ħ²/2M_tot) Σ_α ∂²/∂X_α² + ½ Σ_{αβ} μ_{αβ} (P_α − Π_α)(P_β − Π_β) + U − (ħ²/2) Σ_{s=1}^{3N−6} ∂²/∂q_s² + V,   (α, β = x, y, z).
The first term is the center of mass term −(ħ²/2M_tot) Σ_α ∂²/∂X_α², with M_tot the total nuclear mass and X the position of the nuclear center of mass.
The second term is the rotational term, akin to the kinetic energy of the rigid rotor. Here P_α is the α component of the body-fixed rigid rotor angular momentum operator; see the article on the rigid rotor for its expression in terms of Euler angles. The operator Π_α is a component of an operator known as the vibrational angular momentum operator (although it does not satisfy angular momentum commutation relations),

Π_α = −iħ Σ_{s,t} ζ^α_{s,t} q_s ∂/∂q_t,

with the Coriolis coupling constant

ζ^α_{s,t} = Σ_i Σ_{β,γ} ε_{αβγ} Q_{s,iβ} Q_{t,iγ}.
Here ε_{αβγ} is the Levi-Civita symbol. The terms quadratic in the P_α are centrifugal terms, those bilinear in P_α and Π_α are Coriolis terms. The quantities Q_{s,iγ} are the components of the normal coordinates introduced above. Alternatively, normal coordinates may be obtained by application of Wilson's GF method. The 3 × 3 symmetric matrix μ is called the effective reciprocal inertia tensor. If all q_s were zero (rigid molecule), the Eckart frame would coincide with a principal axes frame (see rigid rotor) and μ would be diagonal, with the equilibrium reciprocal moments of inertia on the diagonal. If all q_s were zero, only the kinetic energies of translation and rigid rotation would survive.
The potential-like term U is the Watson term,

U = −(ħ²/8) Σ_α μ_{αα},

proportional to the trace of the effective reciprocal inertia tensor.
The fourth term in the Watson Hamiltonian is the kinetic energy associated with the vibrations of the atoms (nuclei) expressed in normal coordinates q_s, which, as stated above, are given in terms of the nuclear displacements ρ_iα by q_s = Σ_iα Q_{s,iα} ρ_iα.
Finally, V is the unexpanded potential energy, by definition depending on internal coordinates only. In the harmonic approximation it takes the form ½ Σ_{s=1}^{3N−6} f_s q_s².
See also
Quantum chemistry computer programs
Adiabatic process (quantum mechanics)
Franck–Condon principle
Born–Oppenheimer approximation
GF method
Eckart conditions
Rigid rotor
References
Further reading
A readable and thorough discussion on the spin terms in the molecular Hamiltonian is in:
Molecular physics
Quantum chemistry
Spectroscopy | Molecular Hamiltonian | [
"Physics",
"Chemistry"
] | 4,292 | [
"Quantum chemistry",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic, molecular, and optical physics",
"Spectroscopy"
] |
7,148,964 | https://en.wikipedia.org/wiki/Palomar%205 | Palomar 5 is a globular cluster and a member of the Palomar Globular Clusters group. It was discovered by Walter Baade in 1950, and independently found again by Albert George Wilson in 1955. After the initial name of Serpens, it was subsequently catalogued as Palomar 5.
There is a process of disruption acting on this cluster because of the gravitation of the Milky Way – in fact there are many stars leaving this cluster in the form of a stellar stream. The stream has a mass of 5000 solar masses and is 30,000 light years long. The cluster is currently from the Galactic Center. It shows a noticeable amount of flattening, with an aspect ratio of between its semimajor axis and semiminor axis.
See also
List of globular clusters
List of stellar streams
References
External links
SEDS Palomar 5
Palomar 05
Palomar 05
09792
Palomar 5 Stream | Palomar 5 | [
"Astronomy"
] | 191 | [
"Constellations",
"Serpens"
] |
7,149,012 | https://en.wikipedia.org/wiki/Factorization%20system | In mathematics, it can be shown that every function can be written as the composite of a surjective function followed by an injective function. Factorization systems are a generalization of this situation in category theory.
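As a concrete illustration of this motivating example, the hypothetical helper below (the function name and the test function are arbitrary) factors a finite function into a surjection onto its image followed by an injective inclusion.

```python
# Hedged sketch: epi-mono (surjective/injective) factorization of a finite function.
def epi_mono_factor(f, domain):
    image = sorted({f(x) for x in domain})   # image of f
    e = {x: f(x) for x in domain}            # surjection: domain -> image
    m = {y: y for y in image}                # injection: image -> codomain
    return e, m, image

f = lambda n: n % 3
e, m, image = epi_mono_factor(f, range(10))
assert all(m[e[x]] == f(x) for x in range(10))   # f = m . e
print(image)                                     # [0, 1, 2]
```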
Definition
A factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that:
E and M both contain all isomorphisms of C and are closed under composition.
Every morphism f of C can be factored as f = m ∘ e for some morphisms e ∈ E and m ∈ M.
The factorization is functorial: if u and v are two morphisms such that v ∘ m ∘ e = m′ ∘ e′ ∘ u for some morphisms e, e′ ∈ E and m, m′ ∈ M, then there exists a unique morphism w making the following diagram commute:
Remark: (u, v) is a morphism from m ∘ e to m′ ∘ e′ in the arrow category.
Orthogonality
Two morphisms e and m are said to be orthogonal, denoted e ↓ m, if for every pair of morphisms u and v such that v ∘ e = m ∘ u there is a unique morphism w with w ∘ e = u and m ∘ w = v, so that the corresponding lifting diagram commutes. This notion can be extended to define the orthogonals of sets of morphisms by

H^⊥ = {m | e ↓ m for every e ∈ H}   and   ^⊥H = {e | e ↓ m for every m ∈ H}.
Since in a factorization system E ∩ M contains all the isomorphisms, the condition (3) of the definition is equivalent to

(3') E ⊆ ^⊥M and M ⊆ E^⊥.
Proof: In the previous diagram (3), take m := id and e′ := id (the identity on the appropriate object).
Equivalent definition
The pair of classes of morphisms of C is a factorization system if and only if it satisfies the following conditions:
Every morphism f of C can be factored as f = m ∘ e with e ∈ E and m ∈ M.
E = ^⊥M and M = E^⊥.
Weak factorization systems
Suppose e and m are two morphisms in a category C. Then e has the left lifting property with respect to m (respectively m has the right lifting property with respect to e) when for every pair of morphisms u and v such that ve = mu there is a morphism w such that the following diagram commutes. The difference with orthogonality is that w is not necessarily unique.
A weak factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that:
The class E is exactly the class of morphisms having the left lifting property with respect to each morphism in M.
The class M is exactly the class of morphisms having the right lifting property with respect to each morphism in E.
Every morphism f of C can be factored as f = m ∘ e for some morphisms e ∈ E and m ∈ M.
This notion leads to a succinct definition of model categories: a model category is a pair consisting of a category C and classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that
C has all limits and colimits,
(C ∩ W, F) is a weak factorization system,
(C, F ∩ W) is a weak factorization system, and
W satisfies the two-out-of-three property: if f and g are composable morphisms and two of f, g, g ∘ f are in W, then so is the third.
A model category is a complete and cocomplete category equipped with a model structure. A map is called a trivial fibration if it belongs to F ∩ W, and it is called a trivial cofibration if it belongs to C ∩ W. An object is called fibrant if the morphism to the terminal object is a fibration, and it is called cofibrant if the morphism from the initial object is a cofibration.
References
External links
Category theory | Factorization system | [
"Mathematics"
] | 708 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
7,149,215 | https://en.wikipedia.org/wiki/Sun%20photometer | A Sun photometer is a type of photometer conceived in such a way that it points at the Sun. Recent sun photometers are automated instruments incorporating a Sun-tracking unit, an appropriate optical system, a spectrally filtering device, a photodetector, and a data acquisition system. The measured quantity is called direct-sun radiance.
When a Sun-photometer is placed somewhere within the Earth's atmosphere, the measured radiance is not equal to the radiance emitted by the Sun (i.e. the solar extraterrestrial radiance), because the solar flux is reduced by atmospheric absorption and scattering. Therefore, the measured radiant flux is due to a combination of what is emitted by the Sun and the effect of the atmosphere; the link between these quantities is given by Beer's law.
The atmospheric effect can be removed with Langley extrapolation; this method therefore allows measuring the solar extraterrestrial radiance with ground-based measurements. Once the extraterrestrial radiance is known, one can use the Sun photometer for studying the atmosphere, and in particular for determining the atmospheric optical depth. Also, if the signal at two or more suitably selected spectral intervals is measured, one can use the information derived for calculating the vertically integrated concentration of selected atmospheric gases, such as water vapour, ozone, etc.
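The Langley method amounts to a straight-line fit: by Beer's law, ln V = ln V₀ − τ·m, where V is the measured signal, V₀ the extraterrestrial signal, τ the total optical depth, and m the airmass, so a plot of ln V against m has intercept ln V₀ and slope −τ. The sketch below uses made-up signal values purely for illustration.

```python
# Hedged sketch: Langley extrapolation with illustrative (not measured) data.
import numpy as np

airmass = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0])        # assumed airmasses
signal  = np.array([0.82, 0.76, 0.67, 0.59, 0.52, 0.41])   # assumed detector signal

slope, intercept = np.polyfit(airmass, np.log(signal), 1)
tau = -slope               # total atmospheric optical depth
V0  = np.exp(intercept)    # extrapolated extraterrestrial signal (airmass -> 0)
print(f"optical depth ~ {tau:.3f}, extraterrestrial signal ~ {V0:.3f}")
```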
See also
Dobson ozone spectrophotometer
AERONET
References
Glenn E. Shaw, "Sun photometry", Bulletin of the American Meteorological Society 64, 4-10, 1983.
Optical devices
Electromagnetic radiation meters | Sun photometer | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 325 | [
"Glass engineering and science",
"Optical devices",
"Spectrum (physical sciences)",
"Electromagnetic radiation meters",
"Electromagnetic spectrum",
"Measuring instruments"
] |
7,149,361 | https://en.wikipedia.org/wiki/Regular%20dodecahedron | A regular dodecahedron or pentagonal dodecahedron is a dodecahedron composed of regular pentagonal faces, three meeting at each vertex. It is an example of Platonic solids, described as cosmic stellation by Plato in his dialogues, and it was used as part of Solar System proposed by Johannes Kepler. However, the regular dodecahedron, including the other Platonic solids, has already been described by other philosophers since antiquity.
The regular dodecahedron belongs to the family of truncated trapezohedra because it is the result of truncating the axial vertices of a pentagonal trapezohedron. It is also a Goldberg polyhedron because it is an initial polyhedron from which new polyhedra can be constructed by the process of chamfering. It is related to the other Platonic solids; in particular, the regular icosahedron is its dual polyhedron. Other new polyhedra can be constructed by using the regular dodecahedron.
The regular dodecahedron's metric properties and construction are associated with the golden ratio. The regular dodecahedron appears in popular culture in many forms: the Roman dodecahedron, children's stories, toys, and paintings. It can also be found in nature and supramolecules, as well as in proposals for the shape of the universe. The skeleton of a regular dodecahedron can be represented as a graph called the dodecahedral graph, a Platonic graph. Its Hamiltonian property—a path visits all of its vertices exactly once—is the basis of a puzzle called the icosian game.
As a Platonic solid
The regular dodecahedron is a polyhedron with twelve pentagonal faces, thirty edges, and twenty vertices. It is one of the Platonic solids, a set of polyhedrons in which the faces are regular polygons that are congruent and the same number of faces meet at a vertex. This set of polyhedrons is named after Plato. In Theaetetus, a dialogue of Plato, Plato hypothesized that the classical elements were made of the five uniform regular solids. Plato described the regular dodecahedron only obscurely, remarking, "...the god used [it] for arranging the constellations on the whole heaven". Timaeus, as a personage of Plato's dialogue, associates the other four Platonic solids—regular tetrahedron, cube, regular octahedron, and regular icosahedron—with the four classical elements, adding that there is a fifth solid pattern which, though commonly associated with the regular dodecahedron, is never directly mentioned as such; "this God used in the delineation of the universe." Aristotle also postulated that the heavens were made of a fifth element, which he called aithêr (aether in Latin, ether in American English).
Following Plato's association of it with nature, Johannes Kepler in his Harmonices Mundi sketched each of the Platonic solids, one of them being the regular dodecahedron. In his Mysterium Cosmographicum, Kepler also proposed a model of the Solar System built from the Platonic solids set one inside another, separated by six spheres corresponding to the six planets then known. The ordered solids, from the innermost to the outermost, were: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube.
Many philosophers of antiquity described the regular dodecahedron, including the rest of the Platonic solids. Theaetetus gave a mathematical description of all five and may have been responsible for the first known proof that no other convex regular polyhedra exist. Euclid completely mathematically described the Platonic solids in the Elements, the last book (Book XIII) of which is devoted to their properties. Propositions 13–17 in Book XIII describe the construction of the tetrahedron, octahedron, cube, icosahedron, and dodecahedron in that order. For each solid, Euclid finds the ratio of the diameter of the circumscribed sphere to the edge length. In Proposition 18 he argues that there are no further convex regular polyhedra. Iamblichus states that Hippasus, a Pythagorean, perished in the sea, because he boasted that he first divulged "the sphere with the twelve pentagons".
Relation to the regular icosahedron
The dual polyhedron of a dodecahedron is the regular icosahedron. One property of the dual polyhedron generally is that the original polyhedron and its dual share the same three-dimensional symmetry group. In the case of the regular dodecahedron, it has the same symmetry as the regular icosahedron, the icosahedral symmetry I_h. The regular dodecahedron has ten three-fold axes passing through pairs of opposite vertices, six five-fold axes passing through the centers of opposite faces, and fifteen two-fold axes passing through the midpoints of opposite edges.
When a regular dodecahedron is inscribed in a sphere, it occupies more of the sphere's volume (66.49%) than an icosahedron inscribed in the same sphere (60.55%). The comparison of the two volumes originated in a problem posed by the ancient Greeks: determining which of two shapes has a larger volume, an icosahedron inscribed in a sphere or a dodecahedron inscribed in the same sphere. The problem was solved by Hero of Alexandria, Pappus of Alexandria, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio but are taken to different powers.
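The quoted percentages can be checked numerically; the short script below inscribes each solid in a unit sphere, using the standard edge-length-to-circumradius and volume formulas for the two solids.

```python
# Hedged sketch: fraction of a unit sphere filled by an inscribed dodecahedron
# and by an inscribed icosahedron.
import math

sphere = 4 / 3 * math.pi                              # volume of the unit sphere

a_dod = 1 / (math.sqrt(3) / 4 * (1 + math.sqrt(5)))   # dodecahedron edge (circumradius 1)
v_dod = (15 + 7 * math.sqrt(5)) / 4 * a_dod**3

a_ico = 4 / math.sqrt(10 + 2 * math.sqrt(5))          # icosahedron edge (circumradius 1)
v_ico = 5 / 12 * (3 + math.sqrt(5)) * a_ico**3

print(f"dodecahedron: {100 * v_dod / sphere:.2f}% of the sphere")   # ~66.49%
print(f"icosahedron:  {100 * v_ico / sphere:.2f}% of the sphere")   # ~60.55%
```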
The golden rectangle may also be related to both the regular icosahedron and the regular dodecahedron. The regular icosahedron can be constructed by intersecting three golden rectangles perpendicularly, arranged two-by-two orthogonally, and connecting each of the golden rectangles' vertices with a line segment. The twelve vertices of the regular icosahedron so obtained can be regarded as the centers of the twelve faces of the regular dodecahedron.
Relation to the regular tetrahedron
As two opposing tetrahedra can be inscribed in a cube, and five cubes can be inscribed in a dodecahedron, ten tetrahedra in five cubes can be inscribed in a dodecahedron: two opposing sets of five, with each set covering all 20 vertices and each vertex in two tetrahedra (one from each set, but not the opposing pair).
Configuration matrix
The configuration matrix is a matrix in which the rows and columns correspond to the elements of a polyhedron, i.e., the vertices, edges, and faces. The diagonal of the matrix denotes the number of each element that appears in the polyhedron, whereas the non-diagonal entries denote the number of the column's elements that occur in or at the row's element. The regular dodecahedron can be represented by the following matrix:

[ 20  3  3 ]
[  2 30  2 ]
[  5  5 12 ]
Relation to the golden ratio
The golden ratio is the ratio between two numbers equal to the ratio of their sum to the larger of the two quantities. It is one of the two roots of the polynomial x² − x − 1, expressed as φ = (1 + √5)/2 ≈ 1.618. The golden ratio can be applied to the regular dodecahedron's metric properties, as well as to construct the regular dodecahedron.
The surface area A and the volume V of a regular dodecahedron of edge length a are:

A = 3√(25 + 10√5) a² ≈ 20.6457 a²
V = ((15 + 7√5)/4) a³ ≈ 7.6631 a³
The following Cartesian coordinates define the twenty vertices of a regular dodecahedron centered at the origin and suitably scaled and oriented:

(±1, ±1, ±1)
(0, ±φ, ±1/φ)
(±1/φ, 0, ±φ)
(±φ, ±1/φ, 0)

where φ = (1 + √5)/2 is the golden ratio; the edge length of this dodecahedron is 2/φ = √5 − 1.
If the edge length of a regular dodecahedron is a, the radius r_u of a circumscribed sphere (one that touches the regular dodecahedron at all vertices), the radius r_i of an inscribed sphere (tangent to each of the regular dodecahedron's faces), and the midradius r_m (one that touches the middle of each edge) are:

r_u = (√3/4)(1 + √5) a ≈ 1.401 a
r_i = (φ²/(2√(3 − φ))) a ≈ 1.114 a
r_m = ((3 + √5)/4) a ≈ 1.309 a
Given a regular dodecahedron of edge length one, r_u is the radius of a sphere circumscribing a cube of edge length φ, and r_i is the apothem of a regular pentagon of edge length φ.
The dihedral angle of a regular dodecahedron between every two adjacent pentagonal faces is arccos(−1/√5), approximately 116.565°.
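These relations can be checked numerically; the snippet below evaluates the expressions above for edge length a = 1, with φ the golden ratio.

```python
# Hedged sketch: metric properties of the regular dodecahedron for edge length 1.
import math

phi = (1 + math.sqrt(5)) / 2
a = 1.0

area     = 3 * math.sqrt(25 + 10 * math.sqrt(5)) * a**2   # ~20.646
volume   = (15 + 7 * math.sqrt(5)) / 4 * a**3              # ~7.663
r_circ   = math.sqrt(3) / 4 * (1 + math.sqrt(5)) * a       # ~1.401
r_insc   = phi**2 / (2 * math.sqrt(3 - phi)) * a           # ~1.114
r_mid    = (3 + math.sqrt(5)) / 4 * a                       # ~1.309
dihedral = math.degrees(math.acos(-1 / math.sqrt(5)))      # ~116.565 degrees

print(area, volume, r_circ, r_insc, r_mid, dihedral)
```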
Other related geometric objects
The regular dodecahedron can be interpreted as a truncated trapezohedron. The truncated trapezohedra are the set of polyhedra that can be constructed by truncating the two axial vertices of a trapezohedron. Here, the regular dodecahedron is constructed by truncating the pentagonal trapezohedron.
The regular dodecahedron can also be interpreted as a Goldberg polyhedron, a member of a set of polyhedra with hexagonal and pentagonal faces. Apart from two Platonic solids—the tetrahedron and the cube—the regular dodecahedron is an initial polyhedron of the Goldberg polyhedron construction, and the next polyhedron results from truncating all of its edges, a process called chamfering. This process can be repeated, resulting in more new Goldberg polyhedra. These polyhedra are classified as the first class of Goldberg polyhedra.
The stellations of the regular dodecahedron make up three of the four Kepler–Poinsot polyhedra. The first stellation of a regular dodecahedron is constructed by attaching pentagonal pyramids to its faces, forming a small stellated dodecahedron. The second stellation is formed by attaching wedges to the small stellated dodecahedron, forming a great dodecahedron. The third stellation is formed by attaching sharp triangular pyramids to the great dodecahedron, forming a great stellated dodecahedron.
Appearances
In visual arts
Regular dodecahedra have been used as dice and probably also as divinatory devices. During the Hellenistic era, small hollow bronze Roman dodecahedra were made and have been found in various Roman ruins in Europe. Its purpose is not certain.
In 20th-century art, dodecahedra appear in the work of M. C. Escher, such as his lithographs Reptiles (1943) and Gravitation (1952). In Salvador Dalí's painting The Sacrament of the Last Supper (1955), the room is a hollow regular dodecahedron. Gerard Caris based his entire artistic oeuvre on the regular dodecahedron and the pentagon, presented as a new art movement coined as Pentagonism.
In toys and popular culture
In modern role-playing games, the regular dodecahedron is often used as a twelve-sided die, one of the more common polyhedral dice. The Megaminx twisty puzzle is shaped like a regular dodecahedron alongside its larger and smaller order analogues.
In the children's novel The Phantom Tollbooth, the regular dodecahedron appears as a character in the land of Mathematics. Each face of the regular dodecahedron bears a different facial expression, which swivels to the front as required to match his mood.
In nature and supramolecules
The fossil coccolithophore Braarudosphaera bigelowii (see figure), a unicellular coastal phytoplanktonic alga, has a calcium carbonate shell with a regular dodecahedral structure about 10 micrometers across.
The hydrocarbon dodecahedrane, some quasicrystals and cages have dodecahedral shape (see figure). Some regular crystals such as garnet and diamond are also said to exhibit "dodecahedral" habit, but this statement actually refers to the rhombic dodecahedron shape.
Shape of the universe
Various models have been proposed for the global geometry of the universe. These proposals include the Poincaré dodecahedral space, a positively curved space consisting of a regular dodecahedron whose opposite faces correspond (with a small twist). This was proposed by Jean-Pierre Luminet and colleagues in 2003, and an optimal orientation on the sky for the model was estimated in 2008.
In Bertrand Russell's 1954 short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt", the number 5 said: "I am the number of fingers on a hand. I make pentagons and pentagrams. And but for me dodecahedra could not exist; and, as everyone knows, the universe is a dodecahedron. So, but for me, there could be no universe."
Dodecahedral graph
According to Steinitz's theorem, a graph can be represented as the skeleton of a polyhedron—roughly speaking, a framework of the polyhedron—when it has two properties. It is planar, meaning that it can be drawn with edges connecting its vertices without crossing other edges. It is also a 3-connected graph, meaning that, for a graph with more than three vertices, whenever any two of the vertices are removed the remaining graph stays connected. The skeleton of a regular dodecahedron can be represented as such a graph, called the dodecahedral graph, a Platonic graph.
This graph can also be constructed as the generalized Petersen graph GP(10, 2), where the vertices of a decagon are connected to those of two pentagons, one pentagon connected to the odd vertices of the decagon and the other pentagon connected to the even vertices. Geometrically, this can be visualized as the ten-vertex equatorial belt of the dodecahedron connected to the two 5-vertex polar regions, one on each side.
The high degree of symmetry of the polyhedron is replicated in the properties of this graph, which is distance-transitive, distance-regular, and symmetric. The automorphism group has order one hundred and twenty. The vertices can be colored with 3 colors, as can the edges, and the diameter is five.
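Several of these statements can be checked directly with the networkx library, which provides the dodecahedral graph as a built-in generator.

```python
# Hedged sketch: checking properties of the dodecahedral graph with networkx.
import networkx as nx

G = nx.dodecahedral_graph()                        # skeleton of the regular dodecahedron
print(G.number_of_nodes(), G.number_of_edges())    # 20 vertices, 30 edges
print(nx.check_planarity(G)[0])                    # True: the graph is planar
print(nx.diameter(G))                              # 5
print(sorted({d for _, d in G.degree()}))          # [3]: the graph is 3-regular
```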
The dodecahedral graph is Hamiltonian, meaning that a path visits all of its vertices exactly once. This property is named after William Rowan Hamilton, who invented a mathematical game known as the icosian game. The game's object was to find a Hamiltonian cycle along the edges of a dodecahedron.
Notes
See also
120-cell, a regular polychoron (4D polytope whose surface consists of a hundred and twenty dodecahedral cells)
Braarudosphaera bigelowii – a dodecahedron-shaped coccolithophore (a unicellular phytoplankton alga).
Dodecahedrane (molecule)
Icosahedral twins - Nanoparticles which can have the shape of a regular dodecahedron.
Pentakis dodecahedron
Snub dodecahedron
Truncated dodecahedron
References
External links
Editable printable net of a dodecahedron with interactive 3D view
The Uniform Polyhedra
Origami Polyhedra – Models made with Modular Origami
Dodecahedron – 3-d model that works in your browser
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
VRML#Regular dodecahedron
K.J.M. MacLean, A Geometric Analysis of the Five Platonic Solids and Other Semi-Regular Polyhedra
Dodecahedron 3D Visualization
Stella: Polyhedron Navigator: Software used to create some of the images on this page.
How to make a dodecahedron from a Styrofoam cube
The Greek, Indian, and Chinese Elements – Seven Element Theory
Goldberg polyhedra
Planar graphs
Platonic solids
12 (number) | Regular dodecahedron | [
"Mathematics"
] | 3,152 | [
"Planes (geometry)",
"Planar graphs"
] |
7,149,429 | https://en.wikipedia.org/wiki/Centro%20de%20Investigaci%C3%B3n%20de%20M%C3%A9todos%20Computacionales | Centro de Investigación de Métodos Computacionales (CIMEC, Research Center for Computational Methods) is a research institute located at Predio CONICET Santa Fe Santa Fe, Argentina.
It depends on the Universidad Nacional del Litoral, and on the National Scientific and Technical Research Council (CONICET). The main area of research is Computational mechanics, i.e. the application of numerical methods to various areas of engineering.
The center was formerly known as Centro Internacional de Métodos Computacionales en Ingeniería.
Additional information is available on the official site below.
External links
Official site
Research institutes in Argentina
Engineering research institutes
Santa Fe Province | Centro de Investigación de Métodos Computacionales | [
"Engineering"
] | 137 | [
"Engineering research institutes"
] |
7,149,503 | https://en.wikipedia.org/wiki/Arum%20italicum | Arum italicum is a species of flowering herbaceous perennial plant in the family Araceae, also known as Italian arum and Italian lords-and-ladies. It is native to the British Isles and much of the Mediterranean region, the Caucasus, Canary Islands, Madeira and northern Africa. It is also naturalized in Belgium, the Netherlands, Austria, Argentina, North Island New Zealand and scattered locations in North America.
Description
Arum italicum grows high, with equal spread. It blooms in spring with white flowers that turn to showy red fruit.
In 1778, Lamarck noticed that the inflorescence of this plant produces heat.
A. italicum generally has a chromosome count of 2n = 84, except that a few subspecies (such as subsp. albispathum) have 2n = 56.
Taxonomy
Within the genus, A. italicum belongs to subgenus Arum, section Arum.
Arum italicum may hybridize with Arum maculatum. The status of two subspecies currently included in Arum italicum, subsp. albispathum (Crimea to the Caucasus) and subsp. canariense (Macaronesia), is uncertain and they may represent independent species.
Distribution and habitat
Arum italicum nativity by subspecies is as follows:
A. italicum subsp. italicum is native to Albania, Algeria, Baleares, Bulgaria, Corse, Cyprus, France, Greece, Iraq, Italy, Kriti, Krym, Morocco, Portugal, Sardegna, Sicilia, Spain, Switzerland, Tunisia, Turkey, Turkey-in-Europe, and Yugoslavia.
A. italicum subsp. albispathum is native to Krym, North Caucasus, Transcaucasus, and Turkey.
A. italicum subsp. canariense is native to Azores, Canary Islands, and Madeira.
A. italicum subsp. neglectum is native to Algeria, France, Great Britain, Morocco, and Spain.
Subspecies italicum has a multi-continental introduced presence, including in northeast Argentina, Austria, Belgium, Germany, Great Britain, Ireland, the Netherlands, north New Zealand, and the U.S. states of Illinois, Maryland, Missouri, New York, and North Carolina.
Invasive species
Arum italicum can be invasive in some areas, particularly in the Pacific Northwest of the United States. It is very difficult to control once established. Herbicides kill the foliage of the plant, but may not affect the tuber. Manual control may spread the plants through the dissemination of soil contaminated with bulb and root fragments.
Toxicity
Leaves, fruits and rhizomes contain compounds that make them poisonous. Notably, the plants are rich in oxalates. The ingestion of the plant may be fatal, as it affects the kidneys, digestive tract, and brain.
Cultivation
It is cultivated as an ornamental plant for traditional and woodland shade gardens. Subspecies italicum (the one normally grown in horticulture) has distinctive pale veins on the leaves, whilst subspecies neglectum (known as late cuckoo pint) has faint pale veins, and the leaves may have dark spots. Nonetheless, intermediates between these two subspecies also occur, and their distinctiveness has been questioned. Some gardeners use this arum to underplant with Hosta, as they produce foliage sequentially: when the Hosta withers away, the arum replaces it in early winter, maintaining ground-cover. Numerous cultivars have been developed for garden use, of which A. italicum subsp. italicum 'Marmoratum' has gained the Royal Horticultural Society's Award of Garden Merit.
Gallery
References
External links
Missouri Botanical Garden - Kemper Center for Home Gardening - Arum italicum
Invasive Plant Atlas Italian arum - Arum italicum
italicum
Flora of Europe
Medicinal plants
Garden plants of Europe
Plant toxins
Neurotoxins
Plants described in 1768
Taxa named by Philip Miller | Arum italicum | [
"Chemistry"
] | 815 | [
"Neurochemistry",
"Neurotoxins",
"Chemical ecology",
"Plant toxins"
] |
7,149,681 | https://en.wikipedia.org/wiki/Generalized%20dihedral%20group | In mathematics, the generalized dihedral groups are a family of groups with algebraic structures similar to that of the dihedral groups. They include the finite dihedral groups, the infinite dihedral group, and the orthogonal group O(2). Dihedral groups play an important role in group theory, geometry, and chemistry.
Definition
For any abelian group H, the generalized dihedral group of H, written Dih(H), is the semidirect product of H and Z2, with Z2 acting on H by inverting elements; that is, Dih(H) = H ⋊φ Z2, with φ(0) the identity and φ(1) inversion.
Thus we get:
(h1, 0) * (h2, t2) = (h1 + h2, t2)
(h1, 1) * (h2, t2) = (h1 − h2, 1 + t2)
for all h1, h2 in H and t2 in Z2.
(Writing Z2 multiplicatively, we have (h1, t1) * (h2, t2) = (h1 + t1h2, t1t2) .)
Note that (h, 0) * (0,1) = (h,1), i.e. first the inversion and then the operation in H. Also (0, 1) * (h, t) = (−h, 1 + t); indeed (0,1) inverts h, and toggles t between "normal" (0) and "inverted" (1) (this combined operation is its own inverse).
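A small computational model of this multiplication rule, for the special case H = Z_n, is sketched below; the class name and the choice n = 5 are arbitrary.

```python
# Hedged sketch: elements of Dih(Z_n) as pairs (h, t) with the rule
# (h1, 0)*(h2, t2) = (h1 + h2, t2) and (h1, 1)*(h2, t2) = (h1 - h2, 1 + t2).
class DihZn:
    def __init__(self, n, h, t):
        self.n, self.h, self.t = n, h % n, t % 2

    def __mul__(self, other):
        h = self.h + (other.h if self.t == 0 else -other.h)
        return DihZn(self.n, h, self.t + other.t)

    def __repr__(self):
        return f"({self.h}, {self.t})"

n = 5
r = DihZn(n, 1, 0)        # an element (h, 0)
s = DihZn(n, 0, 1)        # the element (0, 1)
print(s * s)              # (0, 0): (0, 1) is its own inverse
print((r * s) * (r * s))  # (0, 0): every element (h, 1) is its own inverse
print(s * r)              # (4, 1), i.e. (-1, 1): (0, 1) inverts h
```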
The subgroup of Dih(H) of elements (h, 0) is a normal subgroup of index 2, isomorphic to H, while the elements (h, 1) are all their own inverse.
The conjugacy classes are:
the sets {(h, 0), (−h, 0)}
the sets {(h + k + k, 1) | k in H }
Thus for every subgroup M of H, the corresponding set of elements (m,0) is also a normal subgroup. We have:
Dih(H) / M = Dih ( H / M )
Examples
Dihn = Dih(Zn) (the dihedral groups)
For even n there are two sets {(h + k + k, 1) | k in H }, and each generates a normal subgroup of type Dihn / 2. As subgroups of the isometry group of the set of vertices of a regular n-gon they are different: the reflections in one subgroup all have two fixed points, while none in the other subgroup has (the rotations of both are the same). However, they are isomorphic as abstract groups.
For odd n there is only one set {(h + k + k, 1) | k in H }
Dih∞ = Dih(Z) (the infinite dihedral group); there are two sets {(h + k + k, 1) | k in H }, and each generates a normal subgroup of type Dih∞. As subgroups of the isometry group of Z they are different: the reflections in one subgroup all have a fixed point, the mirrors are at the integers, while none in the other subgroup has, the mirrors are in between (the translations of both are the same: by even numbers). However, they are isomorphic as abstract groups.
Dih(S1), or orthogonal group O(2,R), or O(2): the isometry group of a circle, or equivalently, the group of isometries in 2D that keep the origin fixed. The rotations form the circle group S1, or equivalently SO(2,R), also written SO(2), and R/Z ; it is also the multiplicative group of complex numbers of absolute value 1. In the latter case one of the reflections (generating the others) is complex conjugation. There are no proper normal subgroups with reflections. The discrete normal subgroups are cyclic groups of order n for all positive integers n. The quotient groups are isomorphic with the same group Dih(S1).
Dih(Rn ): the group of isometries of Rn consisting of all translations and inversion in all points; for n = 1 this is the Euclidean group E(1); for n > 1 the group Dih(Rn ) is a proper subgroup of E(n ), i.e. it does not contain all isometries.
H can be any subgroup of Rn, e.g. a discrete subgroup; in that case, if it extends in n directions it is a lattice.
Discrete subgroups of Dih(R2 ) which contain translations in one direction are of frieze group type ∞∞ and 22∞.
Discrete subgroups of Dih(R2 ) which contain translations in two directions are of wallpaper group type p1 and p2.
Discrete subgroups of Dih(R3 ) which contain translations in three directions are space groups of the triclinic crystal system.
Properties
Dih(H) is Abelian, with the semidirect product a direct product, if and only if all elements of H are their own inverse, i.e., an elementary abelian 2-group:
Dih(Z1) = Dih1 = Z2
Dih(Z2) = Dih2 = Z2 × Z2 (Klein four-group)
Dih(Dih2) = Dih2 × Z2 = Z2 × Z2 × Z2
etc.
Topology
Dih(Rn ) and its dihedral subgroups are disconnected topological groups. Dih(Rn ) consists of two connected components: the identity component isomorphic to Rn, and the component with the reflections. Similarly O(2) consists of two connected components: the identity component isomorphic to the circle group, and the component with the reflections.
For the group Dih∞ we can distinguish two cases:
Dih∞ as the isometry group of Z
Dih∞ as a 2-dimensional isometry group generated by a rotation by an irrational number of turns, and a reflection
Both topological groups are totally disconnected, but in the first case the (singleton) components are open, while in the second case they are not. Also, the first topological group is a closed subgroup of Dih(R) but the second is not a closed subgroup of O(2).
References
Group theory | Generalized dihedral group | [
"Mathematics"
] | 1,358 | [
"Group theory",
"Fields of abstract algebra"
] |
7,149,861 | https://en.wikipedia.org/wiki/Mixed-mode%20chromatography | Mixed-mode chromatography (MMC), or multimodal chromatography, refers to chromatographic methods that utilize more than one form of interaction between the stationary phase and analytes in order to achieve their separation. What is distinct from conventional single-mode chromatography is that the secondary interactions in MMC cannot be too weak, and thus they also contribute to the retention of the solutes.
History
Before MMC was considered as a chromatographic approach, secondary interactions were generally believed to be the main cause of peak tailing.
However, it was discovered afterwards that secondary interactions can be applied to improve separation power. In 1986, Regnier's group synthesized a stationary phase that had characteristics of anion exchange chromatography (AEX) and hydrophobic interaction chromatography (HIC) for protein separation.
In 1998, a new form of MMC, hydrophobic charge induction chromatography (HCIC), was proposed by Burton and Harding.
In the same year, conjoint liquid chromatography (CLC), which combines different types of monolithic convective interaction media (CIM) disks in the same housing, was introduced by Štrancar et al.
In 1999, Yates' group loaded strong-cation exchange (SCX) and reversed phase liquid chromatography (RPLC) stationary phases sequentially into a capillary column coupled with tandem mass spectrometry (MS/MS) for the analysis of peptides, which afterwards became one of the most efficient techniques in proteomics.
In 2009, Geng's group first achieved online two-dimensional (2D) separation of intact proteins using a single column possessing the separation features of weak-cation exchange chromatography (WCX) and HIC (termed two-dimensional liquid chromatography using a single column, 2D-LC-1C).
Advantages
Higher selectivity: for example, positive, negative and neutral substances could be separated by a reversed phase (RP)/anion-cation exchange (ACE) column in a single run.
Higher loading capacity: for example, the loading capacity of an ACE/hydrophilic interaction chromatography (HILIC) phase increased 10–100 times compared with RPLC, which offers a new option for developing semi-preparative and preparative chromatography.
One mixed-mode column can replace two or even more single-mode columns, which is economical and eco-friendly, since the stationary phase is employed more fully and the consumption and waste of raw materials are reduced.
A single mixed-mode column can be applied for on-line two-dimensional (2D) analysis in a sealed system, by establishing a corresponding chromatographic system, or for off-line 2D analysis as though it were two columns.
Classification of MMC
MMC can be classified into physical MMC and chemical MMC. In the former method, the stationary phase is constructed of two or more types of packing materials. In the chemical method, just one type of packing material containing two or more functionalities is used.
Physical methods
The simplest approach is to connect two commercial columns in series, which is termed a “tandem column”. Another approach is “biphasic column”, by packing two stationary phases separately in two ends of the same column. The third approach is to homogenize two or more different types of stationary phases in a single column, which is termed a “hybrid column” or “mixed-bed column”.
Chemical methods
IEC/HIC
Since IEC and HIC conditions are the closest ones to physiological conditions which are fit for maintaining biological activity, the combinations of them are widely used in the separation of biological products. IEC/HIC MMC has improved separation power and selectivity on the grounds that it applies both electrostatic and hydrophobic interactions.
IEC/RPLC
IEC/RP MMC combines the advantages of RPLC and IEC. For example, WAX/RP has increased separation power and degree of freedom in adjusting the separation selectivity when compared with single WAX or RPLC.
HILIC/RPLC
Liu et al. synthesized a HILIC/RP stationary phase which could show RPLC or HILIC retention by adjusting the organic phase in mobile phase.
HILIC/IEC
Mant et al. reported that HILIC/CEX offered unique selectivity, stronger separation power and wider range of applications compared to RPLC for peptide separations.
SEC/IEC
Hydrophobic interactions in protein SEC are relatively weak at low ionic strength, electrostatic effects may contribute significantly to retention, and this allows us to use an SEC column as a weak ion exchanger.
References
Chromatography | Mixed-mode chromatography | [
"Chemistry"
] | 973 | [
"Chromatography",
"Separation processes"
] |
7,149,912 | https://en.wikipedia.org/wiki/Epidemiology%20of%20chikungunya | Chikungunya is a mosquito-borne alpha virus that was first isolated after a 1952 outbreak in modern-day Tanzania. The virus has circulated in forested regions of sub-Saharan African in cycles involving nonhuman primate hosts and arboreal mosquito vectors. Phylogenetic studies indicate that the urban transmission cycle—the transmission of a pathogen between humans and mosquitoes that exist in urban environments—was established on multiple occasions from strains occurring on the eastern half of Africa in non-human primate hosts. This emergence and spread beyond Africa may have started as early as the 18th century. Currently, available data does not indicate whether the introduction of chikungunya into Asia occurred in the 19th century or more recently, but this epidemic Asian strain causes outbreaks in India and continues to circulate in Southeast Asia.
A number of chikungunya outbreaks have occurred since 2005. However, as of the latest data available, developed countries have yet to report a confirmed indigenous case of chikungunya. An analysis of the chikungunya virus's genetic code suggests that the increased severity of the 2005–present outbreak may be due to a change in the genetic sequence, altering the virus's viral coat protein, which potentially allows it to multiply more easily in mosquito cells. The change allows the virus to use the Asian tiger mosquito (an invasive species) as a vector in addition to the more strictly tropical main vector, Aedes aegypti. In July 2006, a team analyzed the virus's RNA, determined the genetic changes that have occurred in various strains of the virus, and identified the genetic sequences which led to the increased virulence of recent strains. The virus, CHIKV, is a small, enveloped virus belonging to the genus Alphavirus in the family Togaviridae. These characteristics help the virus enter the body; those most affected include individuals over 65 years of age and individuals with underlying medical conditions. Individuals below the age of 30 tend to recover faster, for reasons that are not yet understood.
Outbreaks of chikungunya, on average, have low mortality rates. As it is generally a nonfatal disease, prevalence rates during most outbreaks are higher than incidence rates. Recently, it was discovered that approximately 39% of the worldwide population resides in environments where the chikungunya virus is endemic. Spikes in transmission have increased worldwide fatal cases to roughly 350 people per year as of October 2023, compared with 87 deaths in 2022. Few studies have thoroughly investigated the risks to those living in medically underserved areas, but some surveys suggest higher rates of chronic effects. Challenges relating to staffing and financing in less-developed countries may contribute to the underreporting of cases. Current data on the co-morbidities of chikungunya infection indicate that individuals with severe cases of chikungunya have an increased prevalence of cardiac conditions along with diabetes and respiratory difficulties. With the exception of asthma, the risk of each concurrent condition with CHIKV infection increases with age. While the long-term effects still need to be investigated, on average 40% of individuals with chikungunya virus infection experience persistent disabilities after 6 months, and 28% still do after 18 months. Recent studies suggest a correlation between elevated risk of CHIKV infection and factors including previously experienced joint-related pain and conditions, age of 45 and above, and female sex.
2005–06: Réunion
The largest outbreak of chikungunya ever recorded at the time occurred on the island of Réunion in the western rim of the Indian Ocean from late March 2005 to February 2006. At its height, the incidence peaked at about 25,000 cases per week, or roughly 3,500 daily, in early 2006. After an initial peak in May 2005, the incidence decreased and remained stable through the southern hemisphere winter, rising again at the beginning of October 2005. By mid-December, when southern hemisphere summer temperatures are favorable for the mosquito vector, the incidence began to rise dramatically into the first two months of 2006. The number of reported cases was thought to be underestimated. The case-fatality ratio for chikungunya fever during the outbreak was 1 in 1000. The French government sent several hundred troops to help eradicate mosquitoes. Although confirmed cases were much lower, some estimates based on extrapolations from the number detected by sentinel physicians suggested that as many as 110,000 of Réunion's population of 800,000 people may have been infected. Twelve cases of meningoencephalitis were confirmed to be associated with chikungunya infection. Other countries and territories in the southwest Indian Ocean reported cases as well, including Mauritius, the Seychelles, Madagascar, the Comoros, and Mayotte.
2006: India
In 2006, there was a large outbreak in India. States affected by the outbreak were Andhra Pradesh, Andaman & Nicobar Islands, Tamil Nadu, Karnataka, Maharashtra, Gujarat, Madhya Pradesh, Kerala and Delhi. The initial cases were reported from Hyderabad and Secunderabad as well as from Anantpur district as early as November and December 2005, and the outbreak continued unabated. In Hyderabad alone an average practitioner saw anywhere between 10 and 20 cases every day. Some deaths were reported, but these were thought to be due mainly to the inappropriate use of antibiotics and anti-inflammatory tablets. The major causes of mortality were severe dehydration, electrolyte imbalance and loss of glycemic control. Recovery is the rule, except for an approximately 3 to 5% incidence of prolonged arthritis. As the virus can cause thrombocytopenia, injudicious use of these drugs can cause erosions in the gastric epithelium, leading to exsanguinating upper gastrointestinal bleeding. The use of steroids to control joint pain and inflammation is also considered dangerous and unwarranted. On average, around 5,300 cases were being treated every day. This figure covers only the public sector; figures including the private sector would be much higher.
There have been reports of large scale outbreak of this virus in Southern India. At least 80,000 people in Gulbarga, Tumkur, Bidar, Raichur, Bellary, Chitradurga, Davanagere, Kolar and Bijapur districts in Karnataka state are known to have been affected since December 2005.
A separate outbreak of chikungunya fever was reported from Malegaon town in Nasik district, Maharashtra state, in the first two weeks of March 2006, resulting in over 2000 cases. In Orissa state, at most 5000 cases of fever with muscle aches and headache were reported between February 27 and March 5, 2006.
In Bangalore, the state capital of Karnataka (India), there appeared to be an outbreak of chikungunya in May 2006, with arthralgia/arthritis and rashes, as well as in the neighbouring state of Andhra Pradesh. In the third week of May 2006 the outbreak of chikungunya in North Karnataka was severe; all the North Karnataka districts, especially Gulbarga, Koppal, Bellary, Gadag and Dharwad, were affected. Residents of the region were advised to remain alert and to avoid standing water, which provides fertile breeding grounds for the vector (Aedes aegypti). A later outbreak occurred in Tamil Nadu, India, with 20,000 cases reported in June 2006. The disease was earlier found spreading mostly in the outskirts of Bangalore, but then started spreading within the city as well (updated 30/06/2006). More than 300,000 people were affected in Karnataka as of July 2006.
As reported on 29/06/2006 from Chennai, fresh cases of the disease had been reported in local hospitals. Southern Tamil Nadu districts such as Kanyakumari and Tirunelveli were heavily affected, and residents of Chennai were warned about the painful disease.
In June 2006, chikungunya cases were registered in the Andaman Islands (India) virtually for the first time. By the beginning of September, cases numbered in the thousands, and, as reported in a local news magazine, the outbreak had taken on epidemic proportions in the Andamans. Health authorities worked to handle the situation. Relapsed cases were noticed, with severe pain and swelling in the lower limbs, vomiting and general weakness.
As of July 2006, nearly 50,000 people were affected in Salem, Tamil Nadu.
As of August 2006, nearly 100,000 people were infected in Tamil Nadu. Chennai, capital of Tamil Nadu is one of the worst affected.
On 24 August 2006, The Hindu newspaper reported that the Indian states of Tamil Nadu, Karnataka, Andhra Pradesh, Maharashtra, Madhya Pradesh, Gujarat and Kerala had reported 1.1 million (11 lakh) cases. The government's claim of no deaths is questioned.
2007: Italy
In September 2007, 130 cases were confirmed in the province of Ravenna, Northern Italy, in the contiguous towns of Castiglione di Cervia and Castiglione di Ravenna. One person died. The source of the outbreak was a traveller from Kerala, India.
2009: Thailand
By the end of September 2009, the Thai Ministry of Health reported more than 42,000 cases during the previous year in 50 provinces in the south of Thailand, including the popular tourist destination of Phuket. About 14 years had elapsed since the last appearance of the disease. In May 2009 the provincial hospital in Trang Province prematurely delivered a 2.7 kg (6 pound) male baby from his chikungunya-infected mother in the hope of preventing mother-to-foetus virus transmission. After a cesarean delivery, the physicians discovered that he had also been infected with the chikungunya virus, and put him under intensive care. The child died at six days old from respiratory complications, possibly the only death from the outbreak, but the cause of death may not have been chikungunya since the child was delivered prematurely. The Thai physicians made a preliminary presumption that chikungunya virus might be transmitted from a mother to her foetus.
2011–15 Pacific Islands
Outbreaks in the Pacific Islands began in New Caledonia in 2011 and have since occurred in a number of Pacific countries. Fully half of the entire population of French Polynesia came down with the Asian genotype of chikungunya (130,000 cases with 14 deaths), up sharply from the 35,000 cases reported a month earlier in December 2014; the first case there had occurred in 2013.
2012: Cambodia
An outbreak occurred in Cambodia with at least 1500 confirmed cases. Provinces in which infection was confirmed were: Preah Vihear, Battambang, Kampong Thom, Kampong Chhnang, Kandal, Kampong Speu and Takeo.
2013–14: The Caribbean
In December 2013, it was confirmed that chikungunya was being locally transmitted in the Americas for the first time in the French Caribbean dependency of St. Martin, with 66 confirmed cases and suspected cases of around 181. It is the first time in the Americas that the disease has spread to humans from a population of infected mosquitoes.
By mid-January 2014, a number of cases had been confirmed in five countries: St. Martin, Saint Barthélemy, Martinique, Guadeloupe, and the British Virgin Islands. At the start of April, at least ten nations had reported cases. By the start of May, there were more than 4,100 probable cases, and 31,000 suspected cases spanning 14 countries, including French Guiana, the only non-island nation with at least one reported case. On May 1, the Caribbean Public Health Agency (CARPHA) declared a Caribbean-wide epidemic of the virus.
As of 21 January 2014, no cases had been reported in Puerto Rico. But by 15 July 2014, over 400 cases had been reported and health authorities believed the number of actual cases (i.e., including unreported cases) was much higher.
By November 2014 the Pan American Health Organization reported about 800,000 suspected chikungunya cases in the Caribbean alone.
2014: United States
On July 17, 2014, the first chikungunya case acquired in the United States was reported in Florida by the Centers for Disease Control and Prevention in a man who had not recently traveled outside the United States. Shortly after another case was reported of a person in Florida being infected by the virus, not having traveled outside the U.S.
These were the first two cases where the virus was passed directly by mosquitoes to persons on the U.S. mainland. Aside from the locally acquired infections, there were 484 other cases reported in the United States as of 5 August 2014.
As of 11 September 2014, the number of reported cases in Puerto Rico for the year was 1,636. By 28 October, that number had increased to 2,974 confirmed cases with over 10,000 cases suspected.
2014: Venezuela
In September 2014, the Central University of Venezuela stated that there could be between 65,000 and 117,000 Venezuelans infected with chikungunya. Health Minister Nancy Pérez stated that only 400 Venezuelans were infected with chikungunya.
2014: France
On October 20, 2014, 11 locally acquired cases of chikungunya were reported in Montpellier, Languedoc-Roussillon, in the South of France. 449 imported cases of chikungunya were also reported throughout France during the period May–November 2014.
2014: Costa Rica
As of December 2014, Costa Rica had 47 reported cases of chikungunya, 40 of which originated abroad, while 7 were locally acquired.
2014: Brazil
In June 2014 six cases of the virus were confirmed in Brazil, two in the city of Campinas in the state of São Paulo. The six cases were Brazilian army soldiers who had recently returned from Haiti, where they were participating in the reconstruction efforts as members of the United Nations Stabilisation Mission in Haiti. The information was officially released by Campinas municipality, which stated that it had taken the appropriate actions.
2014: El Salvador
On 25 September 2014, official authorities in El Salvador reported over 30,000 confirmed cases in this new epidemic.
2014: Mexico
On 7 November 2014 Mexico reported an outbreak of chikungunya, acquired by local transmission, in the southern state of Chiapas. The outbreak extends across the coastline from the Guatemala border to the neighbouring state of Oaxaca. Health authorities have reported a cumulative total of 39 laboratory-confirmed cases (by the end of week 48). No suspected cases have been reported.
2014–2015: Colombia
The first cases were officially confirmed in July 2014. Between that month and the end of 2014, as reported by the Colombian Health Institute (Instituto Nacional de Salud - INS ), there were 82,977 clinically confirmed cases and 611 cases confirmed through laboratory tests, bringing the total of confirmed cases during 2014 in Colombia to 83,588, 7 of which led to deaths. These cases were reported in the following regions: Amazonas, Atlántico, Arauca, Antioquia, Barranquilla, Bolívar, Boyacá, Caldas, Cartagena, Casanare, Cauca, Cesar, Córdoba, Cundinamarca, Huila, La Guajira, Magdalena, Meta, Putumayo, Nariño, Norte de Santander, Sucre, Santander, Santa Marta, Risaralda, Tolima, San Andrés and Valle del Cauca. According to news outlets, as of January 2015 at least one major city (Medellín) has issued sanitary alerts due to the expanding epidemic. By January 2015 the epidemic is considered to be in the initial expansion phase and it is expected by the Colombian National Health Institute (Instituto Nacional de Salud - INS) that the total number of cases will reach around 700,000 by the end of 2015 due to the in-country massive travel of tourists to and from regions where cases of the disease have been confirmed and the vector A. aegypti is indigenous. It is expected that the disease will become endemic and sustain itself, with a pattern of outbreaks similar to dengue fever, due to the fact that both vector and natural reservoirs are indigenous in large areas of the country.
On 24 September 2015, the Ministry of Health and Social Protection of Colombia officially declared the country free of chikungunya. There were 441,000 reported cases, but the government estimated that the number of people infected reached 873,000.
2019: Republic of the Congo
The earliest case was reported on 7 January 2019 in Diosso, Republic of the Congo, and an outbreak was declared by the government on 9 February. By 14 April, 6,149 suspected cases had been reported, with Kouilou Department worst affected (47% of cases); suspected cases have also been reported in the Bouenza, Brazzaville, Lékoumou, Niari, Plateaux, Pointe-Noire and Pool departments. There have been no deaths reported.
Notes
Epidemiology
Health disasters in India | Epidemiology of chikungunya | [
"Environmental_science"
] | 3,532 | [
"Epidemiology",
"Environmental social science"
] |
7,150,276 | https://en.wikipedia.org/wiki/Hydrophilic%20interaction%20chromatography | Hydrophilic interaction chromatography (or hydrophilic interaction liquid chromatography, HILIC) is a variant of normal phase liquid chromatography that partly overlaps with other chromatographic applications such as ion chromatography and reversed phase liquid chromatography. HILIC uses hydrophilic stationary phases with reversed-phase type eluents. The name was suggested by Andrew Alpert in his 1990 paper on the subject. He described the chromatographic mechanism for it as liquid-liquid partition chromatography where analytes elute in order of increasing polarity, a conclusion supported by a review and re-evaluation of published data.
Surface
Any polar chromatographic surface can be used for HILIC separations. Even non-polar bonded silicas have been used with extremely high organic solvent composition, thanks to the exposed patches of silica in between the bonded ligands on the support, which can affect the interactions. With that exception, HILIC phases can be grouped into five categories of neutral polar or ionic surfaces:
simple unbonded silica silanol or diol bonded phases
amino or anionic bonded phases
amide bonded phases
cationic bonded phases
zwitterionic bonded phases
Mobile phase
A typical mobile phase for HILIC chromatography includes acetonitrile ("MeCN", also designated as "ACN") with a small amount of water. However, any aprotic solvent miscible with water (e.g. THF or dioxane) can be used. Alcohols can also be used, however, their concentration must be higher to achieve the same degree of retention for an analyte relative to an aprotic solvent–water combination. See also Aqueous normal phase chromatography.
It is commonly believed that in HILIC, the mobile phase forms a water-rich layer on the surface of the polar stationary phase vs. the water-deficient mobile phase, creating a liquid/liquid extraction system. The analyte is distributed between these two layers. However, HILIC is more than just simple partitioning and includes hydrogen donor interactions between neutral polar species as well as weak electrostatic mechanisms under the high organic solvent conditions used for retention. This distinguishes HILIC as a mechanism distinct from ion exchange chromatography. The more polar compounds will have a stronger interaction with the stationary aqueous layer than the less polar compounds. Thus, a separation based on a compound's polarity and degree of solvation takes place.
Additives
Ionic additives, such as ammonium acetate and ammonium formate, are usually used to control the mobile phase pH and ion strength. In HILIC they can also contribute to the polarity of the analyte, resulting in differential changes in retention. For extremely polar analytes (e.g. aminoglycoside antibiotics (gentamicin) or adenosine triphosphate), higher concentrations of buffer (c. 100 mM) are required to ensure that the analyte will be in a single ionic form. Otherwise, asymmetric peak shape, chromatographic tailing, and/or poor recovery from the stationary phase will be observed. For the separation of neutral polar analytes (e.g. carbohydrates), no buffer is necessary.
Other salts, such as 100–300 mM sodium perchlorate, that are soluble in high-organic solvent mixtures (c. 70–90% acetonitrile), can be used to increase the mobile phase polarity to affect elution. These salts are not volatile, so this technique is less useful with a mass spectrometer as the detector. Usually a gradient (to increasing amounts of water) is enough to promote elution.
All ions partition into the stationary phase to some degree, so an occasional "wash" with water is required to ensure a reproducible stationary phase.
Applications
The HILIC mode of separation is used extensively for separation of some biomolecules, organic and some inorganic molecules by differences in polarity. Its utility has increased due to the simplified sample preparation for biological samples, when analyzing for metabolites, since the metabolic process generally results in the addition of polar groups to enhance elimination from the cellular tissue. This separation technique is also particularly suitable for glycosylation analysis and quality assurance of glycoproteins and glycoforms in biologic medical products. For the detection of polar compounds with the use of electrospray-ionization mass spectrometry as a chromatographic detector, HILIC can offer a tenfold increase in sensitivity over reversed-phase chromatography because the organic solvent is much more volatile.
Choice of pH
With surface chemistries that are weakly ionic, the choice of pH can affect the ionic nature of the column chemistry. Properly adjusted, the pH can be set to reduce the selectivity toward functional groups with the same charge as the column, or enhance it for oppositely charged functional groups. Similarly, the choice of pH affects the polarity of the solutes. However, for column surface chemistries that are strongly ionic, and thus resistant to pH values in the mid-range of the pH scale (pH 3.5–8.5), these separations will be reflective of the polarity of the analytes alone, and thus might be easier to understand when doing methods development.
ERLIC
In 2008, Alpert coined the term, ERLIC (electrostatic repulsion hydrophilic interaction chromatography), for HILIC separations where an ionic column surface chemistry is used to repel a common ionic polar group on an analyte or within a set of analytes, to facilitate separation by the remaining polar groups. Electrostatic effects have an order of magnitude stronger chemical potential than neutral polar effects. This allows one to minimize the influence of a common, ionic group within a set of analyte molecules; or to reduce the degree of retention from these more polar functional groups, even enabling isocratic separations in lieu of a gradient in some situations. His subsequent publication further described orientation effects, which others have also called ion-pair normal phase or e-HILIC, reflecting retention mechanisms sensitive to a particular ionic portion of the analyte, either attractive or repulsive. ERLIC (eHILIC) separations need not be isocratic, but the net effect is the reduction of the attraction of a particularly strong polar group, which then requires less strong elution conditions, and the enhanced interaction of the remaining polar (oppositely charged ionic, or non-ionic) functional groups of the analyte(s). Based on the ERLIC column invented by Andrew Alpert, a new peptide mapping methodology was developed with the unique ability to separate asparagine deamidation and isomerization products. These unique properties would be very beneficial for future mass spectrometry-based multi-attribute monitoring in biologics quality control.
Cationic eHILIC
For example, one could use a cation exchange (negatively charged) surface chemistry for ERLIC separations to reduce the influence on retention of anionic (negatively charged) groups (the phosphates of nucleotides or of phosphonyl antibiotic mixtures; or sialic acid groups of modified carbohydrates) to now allow separation based more on the basic and/or neutral functional groups of these molecules. Modifying the polarity of a weakly ionic group (e.g. carboxyl) on the surface is easily accomplished by adjusting the pH to be within two pH units of that group's pKa. For strongly ionic functional groups of the surface (i.e. sulfates or phosphates) one could instead use a lower amount of buffer so the residual charge is not completely ion paired. An example of this would be the use of a 12.5mM (rather than the recommended >20mM buffer), pH 9.2 mobile phase on a polymeric, zwitterionic, betaine-sulfonate surface to separate phosphonyl antibiotic mixtures (each containing a phosphate group). This enhances the influence of the column's sulfonic acid functional groups of its surface chemistry over its, slightly diminished (by pH), quaternary amine. Commensurate with this, these analytes will show a reduced retention on the column eluting earlier, and in higher amounts of organic solvent, than if a neutral polar HILIC surface were used. This also increases their detection sensitivity by negative ion mass spectrometry.
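As an illustrative aside (not from the original article), the guideline of adjusting the pH to within two units of a weakly ionic group's pKa can be made concrete with the Henderson–Hasselbalch relation; the pKa of 4.8 below stands for a hypothetical carboxyl group and is chosen only for the example.

    def fraction_charged(pH, pKa):
        """Henderson-Hasselbalch: fraction of an acidic group in its deprotonated (charged) form."""
        return 1.0 / (1.0 + 10 ** (pKa - pH))

    pKa = 4.8  # hypothetical carboxyl group on a weakly ionic stationary phase
    for pH in (pKa - 2, pKa, pKa + 2):
        print(f"pH {pH:.1f}: {fraction_charged(pH, pKa):.1%} charged")
    # pH 2.8: ~1% charged; pH 4.8: 50%; pH 6.8: ~99% charged

Moving the pH two units to either side of the pKa therefore switches the group between essentially neutral and essentially fully charged, which is why this adjustment effectively turns the polarity of a weakly ionic surface group on or off.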
Anionic eHILIC
By analogy to the above, one can use an anion exchange (positively charged) column surface chemistry to reduce the influence on retention of cationic (positively charged) functional groups for a set of analytes, such as when selectively isolating phosphorylated peptides or sulfated polysaccharide molecules. Use of a pH between 1 and 2 pH units will reduce the polarity of two of the three ionizable oxygens of the phosphate group, and thus will allow easy desorption from the (oppositely charged) surface chemistry. It will also reduce the influence of negatively charged carboxyls in the analytes, since they will be protonated at this low a pH value, and thus contribute less overall polarity to the molecule. Any common, positively charged amino groups will be repelled from the column surface chemistry and thus these conditions enhance the role of the phosphate's polarity (as well as other neutral polar groups) in the separation.
References
Chromatography
Laboratory techniques
Molecular biology
Biochemistry methods | Hydrophilic interaction chromatography | [
"Chemistry",
"Biology"
] | 1,982 | [
"Chromatography",
"Biochemistry methods",
"Separation processes",
"nan",
"Molecular biology",
"Biochemistry"
] |
7,150,540 | https://en.wikipedia.org/wiki/HASTAC | HASTAC (/ˈhāˌstak/), also known as the Humanities, Arts, Science and Technology Alliance and Collaboratory, is a virtual organization and platform comprising over 18,000 individuals and more than 400 affiliate institutions. Members of the HASTAC network actively contribute to the community through an open-access website, by organizing and participating in HASTAC conferences and workshops, and by collaborating with fellow network members.
Until 2016, HASTAC managed the annual $2 million MacArthur Foundation Digital Media and Learning Competition. The 2011 competition, titled “Badges for Lifelong Learning,” was launched in collaboration with the Mozilla Foundation and focused on the use of digital badges to motivate learning, recognize achievement, and validate the acquisition of knowledge or skills.
HASTAC has received funding from various institutions. As of 2021, HASTAC is jointly administered and funded by The Graduate Center, CUNY and Dartmouth College.
Founding
HASTAC was founded in 2002 by Cathy N. Davidson, Ruth F. DeVarney Professor of English, John Hope Franklin Humanities Institute Professor of Interdisciplinary Studies and co-director of the PhD Lab in Digital Knowledge at Duke University and co-founder of the John Hope Franklin Humanities Institute at Duke University, and David Theo Goldberg, Director of the University of California's statewide Humanities Research Institute (UCHRI).
At a meeting of humanities leaders held by the Mellon Foundation in 2002, it was noted that Davidson and Goldberg had each, independently, been working on a variety of projects with scientists and engineers dedicated to expanding the uses of technology in research, teaching, and electronic publishing. They resolved to contact others who were building and analyzing the social and ethical dimensions of new technologies and soon formed the HASTAC.
Currently, HASTAC is governed by a Steering Committee of individuals from different institutions and disciplines.
Programs
HASTAC Scholars program
In 2008, HASTAC initiated the HASTAC Scholars Program, an annual fellowship program that recognizes graduate and undergraduate students engaged in work across the areas of technology, the arts, the humanities, and the social sciences. As of 2021, over 1,800 people from 260 institutions have been named HASTAC Scholars.
HASTAC/MacArthur Foundation Digital Media and Learning Competition
Created in 2007, the HASTAC/MacArthur Foundation Digital Media and Learning Competition is designed to find and inspire the most innovative uses of new media in support of connected learning. Awards have recognized individuals, for-profit companies, universities, and community organizations using new media to transform learning. Information about Digital Media and Learning Competition winners can be found on HASTAC.
Digital Publication Projects
Digital Publication Projects: Michigan Series in Digital Humanities@digitalculturebooks and the UM/HASTAC Digital Humanities Publication Prize
The University of Michigan Press and HASTAC launched The University of Michigan Series in Digital Humanities@digitalculturebooks and the UM/HASTAC Digital Humanities Publication Prize in December 2009. Series editors include Julie Thompson Klein and Tara McPherson; advisory board includes Cathy N. Davidson, Daniel Herwitz, and Wendy Chun (Brown).
Initial 2012 winners were Jentery Sayers and Sheila Brennan.
Events
Conferences
HASTAC member organizations organize international conferences.
Eight HASTAC member institutions coordinated InFORMATION Year in 2006–2007, a year of webcast programming with each month focusing on a theme (InCommon, InCommunity, Interplay, Interaction, InJustice, Integration, Invitation and Innovation). The year culminated in Electronic Techtonics, an international Interface conference co-hosted by Duke University and RENCI (the Renaissance Computing Institute) on April 19–21, 2007. All events were webcast, and archived versions are available free on hastac.org for nonprofit educational purposes.
HASTAC II: Techno-Travels was held in 2008 at the University of California Humanities Research Institute (UCHRI); University of California, Irvine; and the University of California, Los Angeles.
HASTAC III: Traversing Digital Boundaries was organized by the Institute for Computing in Humanities, Arts, and Social Science (I-CHASS) at the University of Illinois at Urbana-Champaign.
HASTAC 2010: Grand Challenges and Global Innovations hosted by I-CHASS was a free, entirely virtual event held in a multiplicity of digital spaces instigated from sites across the globe.
HASTAC 2011: Digital Scholarly Communication was held at the University of Michigan at Ann Arbor. (digital proceedings)
HASTAC 2013: The Storm of Progress was held at York University and in downtown Toronto, Canada on April 25–28, 2013.
HASTAC 2014: Hemispheric Pathways - Critical Makers in International Networks was hosted by the Peruvian Ministry of Culture held in Lima, Peru.
HASTAC 2015: Exploring the Art & Science of Digital Humanities was held at the Kellogg Center of Michigan State University.
HASTAC 2016: Impact, Variation, Innovation, Action was held at the Arizona State University's Nexus Lab in Tempe, AZ.
HASTAC 2017: The Possible Worlds of Digital Humanities was held in Orlando, Florida. Sponsored and organized by the Florida Digital Humanities Consortium (FLDH.org)
HASTAC 2019: Decolonizing Technologies, Reprogramming Education was held in partnership with the Institute for Critical Indigenous Studies at the University of British Columbia and the Department of English at the University of Victoria.
HASTAC 2020: "Hindsight, Foresight, Insight" was planned to be led by Dean Anne Balsamo and held at the University of Texas at Dallas, but was cancelled due to COVID-19.
HASTAC 2023: Critical Making & Social Justice is scheduled to take place at Pratt Institute on June 8–10, 2023.
Mozilla's Drumbeat Festival: Learning, Freedom and the Open Web
HASTAC hosted the "Storming the Academy" tent, which discussed and workshopped open learning and peer-to-peer assessment strategies, ideas, and lessons, at the Mozilla Drumbeat Festival in Barcelona on Nov. 3–5, 2010.
THATCampRTP
On October 16, 2010, HASTAC hosted and helped to organize THATCamp RTP at Duke University's John Hope Franklin Humanities Institute. It was the first area THATCamp for the Research Triangle Park area of North Carolina.
References
Information technology organizations
Digital humanities
Humanities organizations
Computing and society | HASTAC | [
"Technology"
] | 1,282 | [
"Digital humanities",
"Information technology",
"Computing and society",
"Information technology organizations"
] |
7,151,375 | https://en.wikipedia.org/wiki/Space-oblique%20Mercator%20projection | Space-oblique Mercator projection is a map projection devised in the 1970s for preparing maps from Earth-survey satellite data. It is a generalization of the oblique Mercator projection that incorporates the time evolution of a given satellite ground track to optimize its representation on the map. The oblique Mercator projection, on the other hand, optimizes for a given geodesic.
History
The space-oblique Mercator projection (SOM) was developed by John P. Snyder, Alden Partridge Colvocoresses and John L. Junkins in 1976. Snyder had an interest in maps dating back to his childhood; he regularly attended cartography conferences whilst on vacation. In 1972, the United States Geological Survey (USGS) needed to develop a system for reducing the amount of distortion caused when satellite pictures of the ellipsoidal Earth were printed on a flat page. Colvocoresses, the head of the USGS's national mapping program, asked attendees of a geodetic sciences conference for help solving the projection problem in 1976. Snyder worked on the problem with his newly purchased pocket calculator and devised the mathematical formulas needed to solve it. After submitting his calculations to Waldo Tobler for review, Snyder submitted them to the USGS at no charge. Impressed with his work, USGS officials offered Snyder a job, and he promptly accepted. His formulas were then used to produce maps from Landsat 4, which launched in the summer of 1978.
Projection description
The space-oblique Mercator projection provides continual, nearly conformal mapping of the swath sensed by a satellite. Scale is true along the ground track, varying 0.01 percent within the normal sensing range of the satellite. Conformality is correct within a few parts per million for the sensing range. Distortion is essentially constant along lines of constant distance parallel to the ground track. The space-oblique Mercator is the only projection which takes the rotation of Earth into account.
Equations
The forward equations for the Space-oblique Mercator projection for the sphere are as follows:
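A sketch of the spherical form, following the shape usually attributed to Snyder (this reconstruction is not taken from the article itself, and the symbols are assumptions: x and y are map coordinates, R the radius of the globe, φ′ and λ′ the latitude and longitude transformed so that the satellite ground track plays the role of the equator, i the orbital inclination, and P₂/P₁ the ratio of the satellite's orbital period to the Earth's rotation period):

\[
\frac{x}{R} = \int_{0}^{\lambda'} \frac{H - S^{2}}{\sqrt{1 + S^{2}}}\, d\lambda' \;-\; \frac{S}{\sqrt{1 + S^{2}}}\, \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi'}{2}\right)
\]

\[
\frac{y}{R} = (H + 1)\int_{0}^{\lambda'} \frac{S}{\sqrt{1 + S^{2}}}\, d\lambda' \;+\; \frac{1}{\sqrt{1 + S^{2}}}\, \ln\tan\!\left(\frac{\pi}{4} + \frac{\varphi'}{2}\right)
\]

where S = (P₂/P₁) sin i cos λ′ and H is a constant determined by the orbit's inclination and period ratio. The integrals have no elementary closed form and are evaluated numerically or by series expansion; the full spherical and ellipsoidal derivations are given in Snyder's 1981 paper cited below.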
References
John Hessler, Projecting Time: John Parr Snyder and the Development of the Space Oblique Mercator Projection, Library of Congress, 2003
Snyder's 1981 Paper Detailing the Projection's Derivation
Map projections | Space-oblique Mercator projection | [
"Mathematics"
] | 465 | [
"Map projections",
"Coordinate systems"
] |
13,279,719 | https://en.wikipedia.org/wiki/List%20of%20widget%20toolkits | This article provides a list of widget toolkits (also known as GUI frameworks), used to construct the graphical user interface (GUI) of programs, organized by their relationships with various operating systems.
Low-level widget toolkits
Integrated in the operating system
Mac OS X uses Cocoa. Mac OS 9 and Mac OS X used to use Carbon for 32-bit applications.
The Windows API used in Microsoft Windows. Microsoft had the graphics functions integrated in the kernel until 2006.
The Haiku operating system uses an extended and modernised version of the Be API that was used by its predecessor BeOS. Haiku is expected to drop binary and source compatibility with BeOS at some future time, which will result in a Haiku API.
As a separate layer on top of the operating system
The X Window System contains primitive building blocks, called Xt or "Intrinsics", but they are mostly only used by older toolkits such as: OLIT, Motif and Xaw. Most contemporary toolkits, such as GTK or Qt, bypass them and use Xlib or XCB directly.
The Amiga OS Intuition was formerly present in the Amiga Kickstart ROM and integrated itself with a medium-high level widget library which invoked the Workbench Amiga native GUI. Since Amiga OS 2.0, Intuition.library became disk based and object oriented. Also Workbench.library and Icon.library became disk based, and could be replaced with similar third-party solutions.
Since 2005, Microsoft has taken the graphics system out of Windows' kernel.
High-level widget toolkits
OS dependent
On Amiga
BOOPSI (Basic Object Oriented Programming System for Intuition) was introduced with OS 2.0 and enhanced Intuition with a system of classes in which every class represents a single widget or describes an interface event. This led to an evolution in which third-party developers each realised their own personal systems of classes.
MUI: object-oriented GUI toolkit and the official toolkit for MorphOS.
ReAction: object-oriented GUI toolkit and the official toolkit for AmigaOS.
Zune (GUI toolkit) is an open source clone of MUI and the official toolkit for AROS.
On macOS
Cocoa - used in macOS (see also Aqua). As a result of macOS' OPENSTEP lineage, Cocoa also supports Windows, although it is not publicly advertised as such. It is generally unavailable for use by third-party developers. An outdated and feature-limited open-source subset of Cocoa exists within the WebKit project, however; it is used to render Aqua natively in Safari (web browser) for Windows. Apple's iTunes, which supports both GDI and WPF, includes a mostly complete binary version of the framework as "Apple Application Support".
Carbon - the deprecated framework used in Mac OS X to port “classic” Mac applications and software to the Mac OS X.
MacApp, the framework for the Classic Mac OS by Apple.
PowerPlant, the framework for the Classic Mac OS by Metrowerks.
On Microsoft Windows
The Microsoft Foundation Classes (MFC), a C++ wrapper around the Windows API.
The Windows Template Library (WTL), a template-based extension to ATL and a replacement of MFC
The Object Windows Library (OWL), Borland's alternative to MFC.
The Visual Component Library (VCL) is Embarcadero's toolkit used in C++Builder and Delphi. It wraps the native Windows controls, providing object-oriented classes and visual design, although also allowing access to the underlying handles and other WinAPI details if required. It was originally implemented as a successor to OWL, skipping the OWL/MFC style of UI creation, which by the mid-nineties was a dated design model.
Windows Forms (WinForms) is Microsoft's .NET set of classes that handle GUI controls. In the cross-platform Mono implementation, it is an independent toolkit, implemented entirely in managed code (not wrapping the Windows API, which doesn't exist on other platforms). WinForms' design closely mimics that of the VCL.
The Windows Presentation Foundation (WPF) is the graphical subsystem of the .NET Framework 3.0. User interfaces can be created in WPF using any of the CLR languages (e.g. C#) or with the XML-based language XAML. Microsoft Expression Blend is a visual GUI builder for WPF.
The Windows UI Library (WinUI) is the graphical subsystem of universal apps. User interfaces can be created in WinUI using C++ or any of the .NET languages (e.g., C#) or with the XML-based language XAML. Microsoft Expression Blend is a visual GUI builder that supports WinUI.
On Unix, under the X Window System
Note that the X Window System was originally primarily for Unix-like operating systems, but it now runs on Microsoft Windows as well using, for example, Cygwin, so some or all of these toolkits can also be used under Windows.
Motif used in the Common Desktop Environment.
LessTif, an open source (LGPL) implementation of Motif.
MoOLIT, a bridge between the look-and-feel of OPEN LOOK and Motif
OLIT, an Xt-based OPEN LOOK intrinsics toolkit
Xaw, the Project Athena widget set for the X Window System.
XView, a SunView compatible OPEN LOOK toolkit
Cross-platform
Based on C (including bindings to other languages)
Elementary, open source (LGPL), a part of the Enlightenment Foundation Libraries.
GTK, open source (LGPL), primarily for the X Window System, ported to and emulated under other platforms; used in the GNOME, Rox, LXDE and Xfce desktop environments. The Windows port has support for native widgets.
IUP, open source (MIT), a minimalist GUI toolkit in ANSI C for Windows, UNIX and Linux.
Tk, open source (BSD-style), a widget set accessed from Tcl and other high-level script languages (interfaced in Python as Tkinter).
XForms, the Forms Library for X
XVT, Extensible Virtual Toolkit
Based on C++ (including bindings to other languages)
CEGUI, open source (MIT License), cross-platform widget toolkit designed for game development, but also usable for applications and tool development. Supports multiple renderers and optional libraries.
FLTK, open source (LGPL), cross-platform toolkit designed to be small and fast.
FOX toolkit, open source (LGPL), cross-platform toolkit.
GLUI, a very small toolkit written with the GLUT library.
gtkmm, C++ interface for GTK
Juce provides GUI and widget set with the same look and feel in Microsoft Windows, X Windows Systems, macOS and Android. Rendering can be based on OpenGL.
Qt, proprietary and open source (GPL, LGPL) available under Unix and Linux (with X11 or Wayland), Windows (Desktop, CE and Phone 8), macOS, iOS, Android, BlackBerry 10 and embedded Linux; used in the KDE, Trinity, LXQt, and Lumina desktop environment, it's also used in Ubuntu's Unity shell.
Rogue Wave Views (formerly ILOG Views) provides GUI and graphic library for Windows and the main X11 platforms.
TnFOX, open source (LGPL), a portability toolkit.
U++ is an open-source application framework bundled with an IDE (BSD license), mainly created for Win32 and Unix-like operating systems (X11), but it now works with almost any operating system.
wxWidgets (formerly wxWindows), open source (relaxed LGPL), abstract toolkits across several platforms for C++, Python, Perl, Ruby and Haskell.
Zinc Application Framework, cross-platform widget toolkit.
Based on Python
Tkinter, open source (BSD) is a Python binding to the Tk GUI toolkit. Tkinter is included with standard GNU/Linux, Microsoft Windows and macOS installs of Python (a minimal usage sketch follows this list).
Kivy, open source (MIT) is a modern library for rapid development of applications that make use of innovative user interfaces, such as multi-touch apps. Fully written in Python with additional speed ups in Cython.
PySide, open source (LGPL) is a Python binding of the cross-platform GUI toolkit Qt developed by The Qt Company, as part of the Qt for Python project.
PyQt, open source (GPL and commercial) is another Python binding of the cross-platform GUI toolkit Qt developed by Riverbank Computing.
PyGTK, open source (LGPL) is a set of Python wrappers for the GTK graphical user interface library.
wxPython, open source (wxWindows License) is a wrapper for the cross-platform GUI API wxWidgets for the Python programming language.
Pyjs, open source (Apache License 2.0) is a rich web application framework for developing client-side web and desktop applications, it is a port of Google Web Toolkit (GWT) from Java.
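As an illustrative sketch (not part of the original list), a minimal Tkinter program shows the pattern shared by most of the bindings above: create a root window, add widgets, then enter the event loop. The window title and label text are arbitrary examples.

    import tkinter as tk

    root = tk.Tk()                       # create the top-level window
    root.title("Hello")                  # arbitrary example title
    tk.Label(root, text="Hello from Tk").pack(padx=20, pady=10)
    tk.Button(root, text="Quit", command=root.destroy).pack(pady=(0, 10))
    root.mainloop()                      # hand control to the Tk event loop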
Based on Flash
Adobe Flash allows creating widgets running in most web browsers and in several mobile phones.
Adobe Flex provides high-level widgets for building web user interfaces. Flash widgets can be used in Flex.
Flash and Flex widgets will run without a web browser in the Adobe AIR runtime environment.
Based on Go
Fyne, open source (BSD) is inspired by the principles of Material Design to create applications that look and behave consistently across Windows, macOS, Linux, BSD, Android and iOS.
Based on XML
GladeXML with GTK
XAML with Silverlight or Moonlight
XUL
Based on JavaScript
General
jQuery UI
MooTools
Qooxdoo, which could be understood as Qt for the Web
Script.aculo.us
RIAs
Adobe AIR
Dojo Toolkit
Sencha (formerly Ext JS)
Telerik Kendo UI
Webix
WinJS
React
Full-stack framework
Echo3
SproutCore
Telerik UI for ASP/PHP/JSP/Silverlight
Vaadin - Java
ZK - A Java Web framework for building rich Ajax and mobile applications
Resource-based
Google Web Toolkit (GWT)
Pyjs
FBML Facebook Markup Language
No longer developed
YUI (Yahoo! User Interface Library)
Based on SVG
Raphaël is a JavaScript toolkit for SVG interfaces and animations
Based on C#
Gtk#, C# wrappers around the underlying GTK and GNOME libraries, written in C and available on Linux, MacOS and Windows.
QtSharp, C# wrappers around the Qt widget toolkit, which is itself based-on the C++ language.
Windows Forms. There is an original Microsoft implementation that is a wrapper around the Windows API and runs on Windows, and Mono's alternative implementation that is cross-platform.
Based on Java
The Abstract Window Toolkit (AWT) is Sun Microsystems' original widget toolkit for Java applications. It typically uses another toolkit on each platform on which it runs.
Swing is a richer widget toolkit supported since J2SE 1.2 as a replacement for AWT widgets. Swing is a lightweight toolkit, meaning it does not rely on native widgets.
Apache Pivot is an open-source platform for building rich web applications in Java or any JVM-compatible language, and relies on the WTK widget toolkit.
JavaFX and FXML.
The Standard Widget Toolkit (SWT) is a native widget toolkit for Java that was developed as part of the Eclipse project. SWT uses a standard toolkit for the running platform (such as the Windows API, macOS Cocoa, or GTK) underneath.
Codename One, originally designed as a cross-platform mobile toolkit, later expanded to support desktop applications both through Java SE and via a JavaScript pipeline through browsers.
java-gnome provides bindings to the GTK toolkit and other libraries of the GNOME desktop environment
Qt Jambi, the official Java binding to Qt from Trolltech. Commercial support and development have stopped.
Based on Object Pascal
FireMonkey or FMX is a cross-platform widget and graphics library distributed with Delphi and C++Builder since version XE2 in 2011. It has bindings for C++ through C++Builder, and supports Windows, macOS, iOS, Android, and most recently Linux. FireMonkey supports platform-native widgets, such as a native edit control, and custom widgets that are styled to look native on a target operating system. Its graphics are GPU-accelerated and it supports styling, and mixing its own implementation controls with native system controls, which lets apps use native behaviour where it's important (for example, for IME text input.)
IP Pascal uses a graphics library built on top of standard language constructs. Also unusual for being a procedural toolkit that is cross-platform (no callbacks or other tricks), and is completely upward compatible with standard serial input and output paradigms. Completely standard programs with serial output can be run and extended with graphical constructs.
Lazarus LCL (for Pascal, Object Pascal and Delphi via Free Pascal compiler), a class library wrapping GTK+ 1.2–2.x, and the Windows API (Carbon, Windows CE and Qt4 support are all in development).
fpGUI is created with the Free Pascal compiler. It does not rely on any large third-party libraries and currently runs on Linux, Windows, Windows CE, and Mac (via X11). A Carbon (macOS) port is underway.
CLX (Component Library for Cross-platform) was used with Borland's (now Embarcadero's) Delphi, C++ Builder, and Kylix, for producing cross-platform applications between Windows and Linux. It was based on Qt, wrapped in such a way that its programming interface was similar to that of the VCL toolkit. It is no longer maintained and distributed, and has been replaced with FireMonkey, a newer toolkit also supporting more platforms, since 2011.
Based on Objective-C
GNUstep and OpenStep
Cocoa and Cocoa Touch
Based on Dart
Flutter (software) is an open-source and cross platform framework created by Google.
Based on Swift
Cocoa Touch is a framework created by Apple to build applications for iOS, iPadOS and tvOS.
Based on Ruby
Shoes (GUI toolkit) is a cross platform framework for graphical user interface development.
Not yet categorised
WINGs
LiveCode
Wt
Immediate Mode GUI
Comparison of widget toolkits
See also
List of platform-independent GUI libraries
References
Widget toolkits | List of widget toolkits | [
"Technology"
] | 3,137 | [
"Computing-related lists",
"Lists of software"
] |
13,279,771 | https://en.wikipedia.org/wiki/Modeling%20%28psychology%29 | Modeling is:
a method used in certain cognitive-behavioral techniques of psychotherapy whereby the client learns by imitation alone, copying a human model without any specific verbal direction by the therapist, and
a general process in which persons serve as models for others, exhibiting the behavior to be imitated by others. This process is most commonly discussed for children.
Study by Albert Bandura
Albert Bandura most memorably introduced the concept of behavioral modeling in his famous 1961 Bobo doll experiment. In this study, 72 children from ages three to five were divided into groups to watch an adult confederate (the model) interact with an assortment of toys in the experiment room, including an inflated Bobo doll. For children assigned the non-aggressive condition, the role model ignored the doll. For children assigned the aggressive condition, the role model spent the majority of the time physically attacking the doll and shouting at it.
After the role model left the room, the children were allowed to interact with similar toys individually. Children who observed the non-aggressive role model's behavior played quietly with the toys and rarely initiated violence toward the Bobo doll. Children who watched the aggressive role model were more likely to model themselves on that example by hitting, kicking, and shouting at the Bobo doll.
Factors influencing behavioral modeling
Psychological factors
Bandura proposed that four components contribute to behavioral modeling.
Attention: The observer must watch and pay attention to the behavior being modeled.
Retention: The observer must remember the behavior well enough to recreate it.
Reproduction: The observer must physically recreate the actions they observed in step 1.
Reinforcement: The observer's modeled behavior must be rewarded.
Neurological factors
The mirror neuron system, located in the brain's frontal lobe, is a network of neurons that become active when an animal performs a behavior or observes that behavior being performed by another. For example, mirror neurons become active when a monkey grasps an object, just as they do when it watches another monkey perform the same action. While the significance of mirror neurons is still up for debate in the scientific community, many believe them to be the primary biological component in imitative learning.
In neuro-linguistic programming
Modeling is an important component of neuro-linguistic programming (NLP), a field in which specialized modeling techniques are developed.
See also
Cognitive imitation
Mimicry
Mirror neuron
Social cognition
References
Behavioral concepts | Modeling (psychology) | [
"Biology"
] | 467 | [
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
13,280,188 | https://en.wikipedia.org/wiki/20%2C000 | 20,000 (twenty thousand) is the natural number that comes after 19,999 and before 20,001.
20,000 is a round number and is also in the title of Jules Verne's 1870 novel Twenty Thousand Leagues Under the Seas.
Selected numbers in the range 20001–29999
20001 to 20999
20002 = number of surface-points of a tetrahedron with edge-length 100
20067 = The smallest number with no entry in the Online Encyclopedia of Integer Sequences (OEIS)
20100 = sum of the first 200 natural numbers (hence a triangular number)
20160 = 23rd highly composite number; the smallest order belonging to two non-isomorphic simple groups: the alternating group A8 and the Chevalley group A2(4)
20161 = the largest integer that cannot be expressed as a sum of two abundant numbers
20230 = pentagonal pyramidal number
20412 = Leyland number: 9³ + 3⁹
20540 = square pyramidal number
20569 = tetranacci number
20593 = unique prime in base 12
20597 = k such that the sum of the squares of the first k primes is divisible by k.
20736 = 144² = 12⁴ = 10000 in base 12, palindromic in base 15 (6226), also called a dozen great-gross in some duodecimal nomenclature.
20793 = little Schroeder number
20871 = The number of weeks in exactly 400 years in the Gregorian calendar
20903 = first prime of form 120k + 23 that is not a full reptend prime
21000 to 21999
21025 = 145², palindromic in base 12 (10201)
21147 = Bell number
21181 = the least of five remaining Seventeen or Bust numbers in the Sierpiński problem
21209 = number of reduced trees with 23 nodes
21637 = number of partitions of 37
21856 = octahedral number
21943 = Friedman prime
21952 = 28³
21978 = reverses when multiplied by 4: 4 × 21978 = 87912
22000 to 22999
22050 = pentagonal pyramidal number
22140 = square pyramidal number
22222 = repdigit, Kaprekar number: 22222² = 493817284 and 4938 + 17284 = 22222 (verified in the sketch after this list)
22447 = cuban prime
22527 = Woodall number: 11 × 2¹¹ − 1
22621 = repunit prime in base 12
22699 = one of five remaining Seventeen or Bust numbers in the Sierpiński problem
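An illustrative check of the Kaprekar property quoted above (not part of the original article; plain Python, no external libraries assumed):

    n = 22222
    square = n * n                     # 493817284
    digits = str(square)
    left, right = int(digits[:-5]), int(digits[-5:])   # right part keeps as many digits as n has
    print(square, left + right == n)   # 493817284 True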
23000 to 23999
23000 = number of primes below 2¹⁸.
23401 = Leyland number: 6⁵ + 5⁶
23409 = 153², sum of the cubes of the first 17 positive integers
23497 = cuban prime
23821 = square pyramidal number
23833 = Padovan prime
23969 = octahedral number
23976 = pentagonal pyramidal number
24000 to 24999
24000 = number of primitive polynomials of degree 20 over GF(2)
24211 = Zeisel number
24336 = 156², palindromic in base 5: 1234321
24389 = 29³
24571 = cuban prime
24631 = Wedderburn–Etherington prime
24649 = 157², palindromic in base 12: 12321
24737 = one of five remaining Seventeen or Bust numbers in the Sierpinski problem
24742 = number of signed trees with 10 nodes
25000 to 25999
25011 = the smallest composite number, ending in 1, 3, 7, or 9, that in base 10 remains composite after any insertion of a digit
25085 = Zeisel number
25117 = cuban prime
25200 = 224th triangular number, 24th highly composite number, smallest number with exactly 90 factors
25205 = largest number whose factorial is less than 10¹⁰⁰⁰⁰⁰
25482 = number of 21-bead necklaces (turning over is allowed) where complements are equivalent
25585 = square pyramidal number
25724 = Fine number
25920 = smallest number with exactly 70 factors
26000 to 26999
26015 = number of partitions of 38
26214 = octahedral number
26227 = cuban prime
26272 = number of 20-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
26861 = smallest number for which there are more primes of the form 4k + 1 than of the form 4k + 3 up to the number, against Chebyshev's bias
26896 = 164², palindromic in base 9: 40804
27000 to 27999
27000 = 30³
27405 = heptagonal number, hexadecagonal number, 48-gonal number, 80-gonal number, smallest integer that is polygonal in exactly 10 ways.
27434 = square pyramidal number
27559 = Zeisel number
27594 = number of primitive polynomials of degree 19 over GF(2)
27648 = 1¹ × 2² × 3³ × 4⁴
27653 = Friedman prime
27720 = 25th highly composite number; smallest number divisible by the numbers from 1 to 12 (there is no smaller number divisible by the numbers from 1 to 11 since any number divisible by 3 and 4 must be divisible by 12)
27846 = harmonic divisor number
27889 = 167²
28000 to 28999
28158 = pentagonal pyramidal number
28374 = smallest integer to start a run of six consecutive integers with the same number of divisors
28393 = unique prime in base 13
28547 = Friedman prime
28559 = nice Friedman prime
28561 = 169² = 13⁴ = 119² + 120², number that is simultaneously a square number and a centered square number, palindromic in base 12: 14641
28595 = octahedral number
28657 = Fibonacci prime, Markov prime
28900 = 170², palindromic in base 13: 10201
29000 to 29999
29241 = 171², sum of the cubes of the first 18 positive integers
29341 = Carmichael number
29370 = square pyramidal number
29527 = Friedman prime
29531 = Friedman prime
29601 = number of planar partitions of 18
29791 = 31³
Primes
There are 983 prime numbers between 20000 and 30000.
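An illustrative way to reproduce this count (not part of the original article; assumes the third-party SymPy library is available):

    from sympy import primerange

    count = sum(1 for _ in primerange(20000, 30000))  # primes strictly between 20000 and 30000
    print(count)  # 983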
References
20000 | 20,000 | [
"Mathematics"
] | 1,343 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,205 | https://en.wikipedia.org/wiki/30%2C000 | 30,000 (thirty thousand) is the natural number that comes after 29,999 and before 30,001.
Selected numbers in the range 30001–39999
30001 to 30999
30029 = primorial prime
30030 = primorial
30031 = smallest composite number which is one more than a primorial
30203 = safe prime
30240 = harmonic divisor number
30323 = Sophie Germain prime and safe prime
30420 = pentagonal pyramidal number
30537 = Riordan number
30694 = open meandric number
30941 = first base 13 repunit prime
31000 to 31999
31116 = octahedral number
31185 = number of partitions of 39
31337 = cousin prime, pronounced elite, an alternate way to spell 1337, an obfuscated alphabet made with numbers and punctuation, known and used in the gamer, hacker, and BBS cultures.
31395 = square pyramidal number
31397 = prime number followed by a record prime gap of 72, the first greater than 52
31688 = the number of years approximately equal to 1 trillion seconds
31721 = start of a prime quadruplet
31929 = Zeisel number
32000 to 32999
32043 = smallest number whose square is pandigital.
32045 = can be expressed as a sum of two squares in more ways than any smaller number.
32760 = harmonic divisor number
32761 = 181², centered hexagonal number
32767 = 2¹⁵ − 1, largest positive value for a signed (two's complement) 16-bit integer on a computer.
32768 = 2¹⁵ = 8⁵ = 32³, maximum absolute value of a negative value for a signed (two's complement) 16-bit integer on a computer.
32800 = pentagonal pyramidal number
32993 = Leyland prime using 2 & 15 (2¹⁵ + 15²)
33000 to 33999
33333 = repdigit
33461 = Pell number, Markov number
33511 = square pyramidal number
33781 = octahedral number
34000 to 34999
34560 = superfactorial of 5 (1! × 2! × 3! × 4! × 5!)
34790 = number of non-isomorphic set-systems of weight 13.
34841 = start of a prime quadruplet
34969 = favorite number of the Muppet character Count von Count
35000 to 35999
35720 = square pyramidal number
35840 = number of ounces in a long ton (2,240 pounds)
35890 = tribonacci number
35899 = alternating factorial
35937 = 33³, chiliagonal number
35964 = digit-reassembly number
36000 to 36999
36100 = sum of the cubes of the first 19 positive integers
36463 = number of parallelogram polyominoes with 14 cells
36594 = octahedral number
37000 to 37999
37338 = number of partitions of 40
37378 = semi-meandric number
37634 = third term of the Lucas–Lehmer sequence
37666 = Markov number
37926 = pentagonal pyramidal number
38000 to 38999
38024 = square pyramidal number
38209 = n such that n | (3ⁿ + 5)
38305 = the largest Forges-compatible number (for index 32) to the field . But a conjecture of Viggo Brun predicts that there are infinitely many such numbers for any Galois field unless is bad.
38416 = 14⁴
38501 = 7⁴ + 190²: Friedlander–Iwaniec prime. Smallest prime separated by at least 40 from the nearest primes (38461 and 38543). It is thus an isolated prime. Chen prime.
38807 = number of non-equivalent ways of expressing 10,000,000 as the sum of two prime numbers
38962 = Kaprekar number
39000 to 39999
39299 = Integer connected with coefficients in expansion of Weierstrass P-function
39304 = 34³
39559 = octahedral number
39648 = tetranacci number
Primes
There are 958 prime numbers between 30000 and 40000.
References
30000 | 30,000 | [
"Mathematics"
] | 892 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,211 | https://en.wikipedia.org/wiki/40%2C000 | 40,000 (forty thousand) is the natural number that comes after 39,999 and before 40,001. It is the square of 200.
Selected numbers in the range 40001–49999
40001 to 40999
40320 = smallest factorial (8!) that is not a highly composite number
40425 = square pyramidal number
40585 = largest factorion
40678 = pentagonal pyramidal number
40755 = the smallest number greater than 1 that is simultaneously a triangular number, a pentagonal number, and a hexagonal number. Additionally, it is a 390-gonal, 4077-gonal, and 13586-gonal number.
40804 = palindromic square
41000 to 41999
41041 = Carmichael number
41472 = 3-smooth number, number of reduced trees with 24 nodes
41586 = Large Schröder number
41616 = triangular square number
41835 = Motzkin number
41841 = number whose reciprocal is a repeating decimal with period 7: 1/41841 = 0.00002390000239…
42000 to 42999
42680 = octahedral number
42875 = 35³
42925 = square pyramidal number
43000 to 43999
43261 = Markov number
43380 = number of nets of a dodecahedron
43390 = number of primes ≤ 2¹⁹
43560 = pentagonal pyramidal number
43691 = Wagstaff prime
43777 = smallest member of a prime sextuplet
44000 to 44999
44044 = palindrome obtained from 79 after 6 iterations of the "reverse and add" iterative process
44100 = sum of the cubes of the first 20 positive integers; 44,100 Hz is a common sampling frequency in digital audio (and is the standard for compact discs).
44444 = repdigit
44583 = number of partitions of 41
44721 = smallest positive integer such that the expression − ≤ 10−9
44724 = maximum number of days in which a human being has been verified to live (Jeanne Calment).
44944 = palindromic square
45000 to 45999
45360 = 26th highly composite number; smallest number with exactly 100 factors (including one and itself)
46000 to 46999
46080 = double factorial of 12
46233 = sum of the first eight factorials
46249 = 2nd number that can be written as in 3 ways
46368 = Fibonacci number
46656 = 216² = 36³ = 6⁶, 3-smooth number
46657 = Carmichael number
46972 = number of prime knots with 14 crossings
47000 to 47999
47058 = primary pseudoperfect number
47160 = 10th derivative of x^x at x = 1
47321/33461 ≈ √2
48000 to 48999
48629 = number of trees with 17 unlabeled nodes
48734 = number of 22-bead necklaces (turning over is allowed) where complements are equivalent
49000 to 49999
49151 = Woodall number
49152 = 3-smooth number
49726 = pentagonal pyramidal number
49940 = number of 21-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
Primes
There are 930 prime numbers between 40000 and 50000.
References
40000 | 40,000 | [
"Mathematics"
] | 721 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,216 | https://en.wikipedia.org/wiki/50%2C000 | 50,000 (fifty thousand) is the natural number that comes after 49,999 and before 50,001.
Selected numbers in the range 50001–59999
50001 to 50999
50069 = 1¹ + 2² + 3³ + 4⁴ + 5⁵ + 6⁶
50400 = 27th highly composite number
50625 = 15⁴, smallest fourth power that can be expressed as the sum of only five distinct fourth powers, palindromic in base 14 (14641₁₄)
50653 = 37³, palindromic in base 6 (1030301₆)
51000 to 51999
51076 = 226², palindromic in base 15 (10201₁₅)
51641 = Markov number
51984 = 228² = 37³ + 11³, the smallest square that is the sum of only five distinct fourth powers.
52000 to 52999
52488 = 3-smooth number
52633 = Carmichael number
53000 to 53999
53016 = pentagonal pyramidal number
53174 = number of partitions of 42
53361 = 231², sum of the cubes of the first 21 positive integers
54000 to 54999
54205 = Zeisel number
54688 = 2-automorphic number
54748 = narcissistic number
54872 = 38³, palindromic in base 9 (83238₉)
54901 = chiliagonal number
55000 to 55999
55296 = 3-smooth number
55440 = the 9th superior highly composite number; the 9th colossally abundant number, the 28th highly composite number.
55459 = one of five remaining Seventeen or Bust numbers in the Sierpinski problem
55555 = repdigit
55860 = harmonic divisor number
55987 = repunit prime in base 6
56000 to 56999
56011 = Wedderburn-Etherington number
56092 = the number of groups of order 256
56169 = 237², palindromic in octal (155551₈)
56448 = pentagonal pyramidal number
57000 to 57999
57121 = 239², palindromic in base 14 (16B61₁₄)
58000 to 58999
58081 = 241², palindromic in base 15 (12321₁₅)
58367 = smallest integer that cannot be expressed as a sum of fewer than 1079 tenth powers
58786 = Catalan number
58921 = Friedman prime
59000 to 59999
59049 = 243² = 9⁵ = 3¹⁰
59051 = Friedman prime
59053 = Friedman prime
59081 = Zeisel number
59263 = Friedman prime
59273 = Friedman prime
59319 = 39³
59536 = 244², palindromic in base 11 (40804₁₁)
Primes
There are 924 prime numbers between 50000 and 60000.
References
50000 | 50,000 | [
"Mathematics"
] | 625 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,221 | https://en.wikipedia.org/wiki/60%2C000 | 60,000 (sixty thousand) is the natural number that comes after 59,999 and before 60,001. It is a round number. It is the value of φ(75025), where φ is Euler's totient function.
Selected numbers in the range 60,000–69,999
60,001 to 60,999
60,049 = Leyland number using 3 & 10 (3¹⁰ + 10³)
60,101 = smallest prime whose reciprocal has a decimal period of 100
61,000 to 61,999
61,776 = 2⁴ × 3³ × 11 × 13 = 1⁵ + 2⁵ + 3⁵ + 4⁵ + 5⁵ + 6⁵ + 7⁵ + 8⁵. It is an untouchable number, a triangular number, hexagonal number, 100-gonal number, and is polygonal in 6 other ways.
62,000 to 62,999
62,208 = 3-smooth number
62,210 = Markov number
62,745 = Carmichael number
63,000 to 63,999
63,020 = amicable number with 76084
63,261 = number of partitions of 43
63,360 = inches in a mile
63,600 = number of free 12-ominoes
63,750 = pentagonal pyramidal number
63,973 = Carmichael number
64,000 to 64,999
64,000 = 40³
64,009 = sum of the cubes of the first 22 positive integers
64,079 = Lucas number
64,442 = number of integer-degree intersections on Earth: 360 longitudes × 179 latitudes + 2 poles = 64,442
64,620 = an untouchable number, a triangular number, a hexagonal number, and a number such that π(64620) = 64620/10
65,000 to 65,999
65,025 = 255², palindromic in base 11 (44944₁₁)
65,535 = largest value for an unsigned 16-bit integer on a computer.
65,536 = 2¹⁶ = 4⁸ = 16⁴ = 256², also 2↑↑4 = 2↑↑↑3 using Knuth's up-arrow notation, smallest integer with exactly 17 divisors, palindromic in base 15 (14641₁₅), number of directed graphs on 4 labeled nodes
65,537 = largest known Fermat prime
65,539 = the 6544th prime number, and both 6544 and 65539 have digital root of 1; a regular prime; a larger member of a twin prime pair; a smaller member of a cousin prime pair; a happy prime; a weak prime; a middle member of a prime triplet, (65537, 65539, 65543); a middle member of a three-term primes in arithmetic progression, (65521, 65539, 65557).
65,792 = Leyland number using 2 & 16 (2¹⁶ + 16²)
66,000 to 66,999
66,012 = tribonacci number
66,049 = 257², palindromic in hexadecimal (10201₁₆)
66,198 = Giuga number
66,666 = repdigit
67,000 to 67,999
67,081 = 259², palindromic in base 6 (1234321₆)
67,171 = 1⁶ + 2⁶ + 3⁶ + 4⁶ + 5⁶ + 6⁶
67,607 = largest of five remaining Seventeen or Bust numbers in the Sierpiński problem
67,626 = pentagonal pyramidal number
68,000 to 68,999
68,906 = number of prime numbers having six digits.
68,921 = 41³
69,000 to 69,999
69,632 = Leyland number using 4 & 8 (4⁸ + 8⁴)
69,696 = square of 264; only known palindromic square that can be expressed as the sum of a pair of twin primes: 69,696 = 34847 + 34849.
69,984 = 3-smooth number
Primes
There are 878 prime numbers between 60000 and 70000.
References
60000 | 60,000 | [
"Mathematics"
] | 847 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,227 | https://en.wikipedia.org/wiki/70%2C000 | 70,000 (seventy thousand) is the natural number that comes after 69,999 and before 70,001. It is a round number.
Selected numbers in the range 70001–79999
70001 to 70999
70030 = largest number of digits of π that have been recited from memory
71000 to 71999
71656 = pentagonal pyramidal number
72000 to 72999
72771 = 3 × 127 × 191, a sphenic number, triangular number, and hexagonal number.
73000 to 73999
73296 = the smallest number n for which n−3, n−2, n−1, n+1, n+2, n+3 are all sphenic numbers.
73440 = 15 × 16 × 17 × 18
73712 = number of n-Queens Problem solutions for n = 13
73728 = 3-smooth number
74000 to 74999
74088 = 42³ = 2³ × 3³ × 7³
74353 = Friedman prime
74897 = Friedman prime
75000 to 75999
75025 = Fibonacci number, Markov number
75175 = number of partitions of 44
75361 = Carmichael number
76000 to 76999
76084 = amicable number with 63020
76424 = tetranacci number
77000 to 77999
77777 = repdigit
77778 = Kaprekar number
78000 to 78999
78125 = 5⁷
78163 = Friedman prime
78498 = the number of primes under 1,000,000
78557 = conjectured to be the smallest Sierpiński number
78732 = 3-smooth number
79000 to 79999
79507 = 43³
Primes
There are 902 prime numbers between 70000 and 80000.
References
70000 | 70,000 | [
"Mathematics"
] | 400 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,231 | https://en.wikipedia.org/wiki/80%2C000 | 80,000 (eighty thousand) is the natural number after 79,999 and before 80,001.
Selected numbers in the range 80,000–89,999
80,782 = Pell number P₁₄
81,081 = smallest abundant number ending in 1, 3, 7, or 9
81,181 = number of reduced trees with 25 nodes
82,000 = the only currently known number greater than 1 that can be written in bases from 2 through 5 using only 0s and 1s (sequence A258107 in the On-Line Encyclopedia of Integer Sequences)
82,025 = number of primes ≤ 2²⁰
82,467 = number of square (0,1)-matrices without zero rows and with exactly 6 entries equal to 1
82,656 = Kaprekar number: 82656² = 6832014336; 68320 + 14336 = 82656
82,944 = 3-smooth number: 2¹⁰ × 3⁴
83,097 = Riordan number
83,160 = the 29th highly composite number
83,357 = Friedman prime
83,521 = 17⁴
84,187 = number of parallelogram polyominoes with 15 cells.
84,375 = 3³ × 5⁵
84,672 = number of primitive polynomials of degree 21 over GF(2)
85,085 = product of five consecutive primes: 5 × 7 × 11 × 13 × 17
85,184 = 44³
86,400 = seconds in a day: 24 × 60 × 60 and common DNS default time to live
87,360 = unitary perfect number
88,789 = the start of a prime 9-tuple, along with 88793, 88799, 88801, 88807, 88811, 88813, 88817, and 88819.
88,888 = repdigit
89,134 = number of partitions of 45
Primes
There are 876 prime numbers between 80000 and 90000.
See also
80,000 Hours, a British social impact career advisory organization
References
80000 | 80,000 | [
"Mathematics"
] | 432 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
13,280,234 | https://en.wikipedia.org/wiki/90%2C000 | 90,000 (ninety thousand) is the natural number following 89,999 and preceding 90,001. It is the sum of the cubes of the first 24 positive integers, and is the square of 300.
Selected numbers in the range 90,000–99,999
90,625 = the only five-digit automorphic number: 90625² = 8212890625
91,125 = 45³
91,144 = Fine number
92,205 = number of 23-bead necklaces (turning over is allowed) where complements are equivalent
92,706 = solution to the puzzle KAYAK + KAYAK + KAYAK + KAYAK + KAYAK + KAYAK = SPORT, where each letter represents a digit: KAYAK = 15451, and adding the six copies gives SPORT = 92,706.
93,312 = Leyland number: 6⁶ + 6⁶. Also a 3-smooth number.
94,249 = palindromic square: 307²
94,932 = Leyland number: 7⁵ + 5⁷
95,121 = Kaprekar number: 95121² = 9048004641; 90480 + 04641 = 95121
95,420 = number of 22-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
96,557 = Markov number: 5² + 6466² + 96557² = 3 × 5 × 6466 × 96557
97,336 = 46³, the largest 5-digit cube
98,304 = 3-smooth number
99,066 = largest number whose square uses all of the decimal digits once: 99066² = 9814072356. It is also strobogrammatic in decimal.
99,856 = 316², the largest 5-digit square
99,991 = largest five-digit prime number
99,999 = repdigit, Kaprekar number: 99999² = 9999800001; 99998 + 00001 = 99999
Primes
There are 879 prime numbers between 90000 and 100000.
References
External links
90000 | 90,000 | [
"Mathematics"
] | 454 | [
"Mathematical objects",
"Number stubs",
"Elementary mathematics",
"Integers",
"Numbers"
] |
13,280,509 | https://en.wikipedia.org/wiki/Molecular%20propeller | A molecular propeller is a molecule that can propel fluids when rotated, due to its special shape that is designed in analogy to macroscopic propellers: it has several molecular-scale blades attached at a certain pitch angle around the circumference of a shaft, aligned along the rotational axis.
The molecular propellers designed in the group of Prof. Petr Král from the University of Illinois at Chicago have their blades formed by planar aromatic molecules and the shaft is a carbon nanotube. Molecular dynamics simulations show that these propellers can serve as efficient pumps in the bulk and at the surfaces of liquids. Their pumping efficiency depends on the chemistry of the interface between the blades and the liquid. For example, if the blades are hydrophobic, water molecules do not bind to them, and the propellers can pump them well. If the blades are hydrophilic, water molecules form hydrogen bonds with the atoms in the polar blades. This can largely block the flow of other water molecules around the blades and significantly slow down their pumping.
Driving
Molecular propellers can be rotated by molecular motors that can be driven by chemical, biological, optical and electrical means, or various ratchet-like mechanisms. Nature realizes most biological activities with a large number of highly sophisticated molecular motors, such as myosin, kinesin, and ATP synthase. For example, rotary molecular motors attached to protein-based tails called flagella can propel bacteria.
Applications
In a similar way, the assembly of a molecular propeller and a molecular motor can form a nanoscale machine that can pump fluids or perform locomotion. Future applications of these nanosystems range from novel analytical tools in physics and chemistry, drug delivery and gene therapy in biology and medicine, advanced nanofluidic lab-on-a-chip techniques, to tiny robots performing various activities at the nanoscale or microscale.
See also
Molecular modelling
Molecular motor
Nanocar
Nanotechnology
References
External links
University of Illinois at Chicago press release
Technology & Science highlight in CBC news
Highlight of molecular propeller in Nature
Highlight of molecular propeller in Nature Nanotechnology
Molecular machines | Molecular propeller | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 421 | [
"Physical systems",
"Nanotechnology",
"Machines",
"Molecular machines"
] |
13,281,304 | https://en.wikipedia.org/wiki/V391%20Pegasi | V391 Pegasi, also catalogued as HS 2201+2610, is a blue-white subdwarf star approximately 4,400 light-years away in the constellation of Pegasus. The star is classified as an "extreme horizontal branch star". It is small, with only half the mass and a bit less than one quarter the diameter of the Sun. It has luminosity 34 times that of the Sun. It could be quite old, perhaps in excess of 10 Gyr. It is believed that the star's mass when it was still on the main sequence was between 0.8 and 0.9 times that of the Sun.
In 2001, Roy Østensen et al. announced that the star, then called HS 2201+2610, is a variable star. It was given its variable star designation, V391 Pegasi, in 2003. It is a pulsating variable star of the V361 Hydrae type (or also called sdBVr type).
Formation
Subdwarf B stars such as V391 Pegasi are thought to be the result of the ejection of the hydrogen envelope of a red giant star at or just before the onset of helium fusion. The ejection left only a tiny amount of hydrogen on the surface—less than 1/1000 of the total stellar mass. The future for the star is to eventually cool down to make a low-mass white dwarf. Most stars retain more of their hydrogen after the first red giant phase, and eventually become asymptotic giant branch stars. The reason that some stars, like V391 Pegasi, lose so much mass is not well known. At the tip of the red-giant branch, the red giant precursors of the subdwarf stars reach their maximum radius, on the order of 0.7 AU. After this point, the hydrogen envelope is lost and helium fusion begins—this is known as the helium flash.
Hypothesized planetary system
In 2007, research using the variable star timing method indicated the presence of a gas giant planet orbiting V391 Pegasi. This planet was designated V391 Pegasi b. This planet around an "extreme horizontal branch" star provided clues about what could happen to the planets in the Solar System when the Sun turns into a red giant within the next 5 billion years.
However, subsequent research published in 2018, taking the large amount of new photometric time-series data amassed since the publication of the original data into account, found evidence both for and against the exoplanet's existence. Although the planet's existence was not disproved, the case for its existence was now certainly weaker, and the authors stated that it "requires confirmation with an independent method."
References
Sources
External links
B-type subdwarfs
Pegasus (constellation)
Very rapidly pulsating hot stars
Pegasi, V391 | V391 Pegasi | [
"Astronomy"
] | 594 | [
"Pegasus (constellation)",
"Constellations"
] |
13,281,619 | https://en.wikipedia.org/wiki/Floor%20drain | A floor drain is a plumbing fixture that is installed in the floor of a structure, mainly designed to remove any standing water near it. They are usually round, but can also be square or rectangular. They usually range from ; most are in diameter. They have gratings that are made of metal or plastic. The floor around the drain is also level or sloped to allow the water to flow to the drain.
Many residential basements have one or more floor drains, usually near a water heater or washer/dryer. Floor drains can also be found in commercial basements, restrooms, kitchens, refrigerator areas, locker/shower rooms, laundry facilities, and near swimming pools, among other places.
A floor drain should always have a strainer secured over it to prevent injury, entry of foreign objects, or introduction of unwanted pests into the facility. However, if the strainer is not smooth enough, hair and other objects can still get stuck in it, clogging the drain. Usually, each floor drain is connected to a trap, to prevent sewer gases from escaping into indoors spaces.
A floor sink is a type of floor drain primarily used as an indirect waste receptor. It is generally deeper than a standard floor drain and can have a full or partial grate, or no grate as required to accommodate the indirect waste pipes. It usually has a dome strainer in the bottom to prevent splash-back. The body material can be epoxy coated or enameled cast iron, stainless steel, or PVC. Floor sinks are found in commercial kitchens and some hospital applications.
Standards
The American Society of Mechanical Engineers publishes standard A112.6.3, on floor and trench drains.
References
Plumbing | Floor drain | [
"Engineering"
] | 349 | [
"Construction",
"Plumbing"
] |
13,281,762 | https://en.wikipedia.org/wiki/Komatsu%20LAV | The Komatsu LAV (Light Armoured Vehicle) is a Japanese military vehicle first produced in 2002. Currently used exclusively by the Japan Self-Defense Force (JSDF), it has seen use in the Iraq War. It is built by Komatsu Limited's Defense Systems Division in Komatsu, Ishikawa, Japan. Komatsu's factory designation for the vehicle is KU50W.
The exterior resembles the Panhard VBL used by the French army, but the LAV has 4 doors and a large cabin for carrying soldiers. The LAV can also be transported by air in vehicles like the CH-47J and the C-130H.
History
The Komatsu LAV was developed in 1997 to meet a JGSDF need for an armored wheeled vehicle that could provide armored protection since their Toyota High Mobility Vehicles and Mitsubishi Type 73 light trucks were not adequate to provide protection from small arms fire. They were initially created with the concept of a potential Soviet invasion during the Cold War before they were relegated to anti-terrorist/invasion operations.
The Komatsu LAV made its first appearance in Kuwait, where JGSDF units deployed it prior to humanitarian operations in Samawah, a city in Iraq 280 km (174 mi) southeast of Baghdad. An initial 400 LAVs were brought into JGSDF service in March 2005. JASDF base security units are also equipped with the LAV as a main vehicle for patrols.
In February 2019, Komatsu formally announced it will no longer develop new models of the LAV, citing high cost in developing a new model and low profit return when production was first halted in 2017.
Variants
No variants are known to be available, but the vehicle appears to have been built in at least three production models, namely KU50W-0002K, KU50W-0003K and KU50W-0005K.
Design
The Komatsu LAV's open-split roof hatch provides additional protection to the gunner from all directions when it is locked in an upright position. The vehicles deployed in Iraq are fitted with reinforced bulletproof windshields, wire cutters and an armoured tub around the gun mount for extra protection.
According to reports, the vehicle is bulletproof against 5.56 mm and 7.62 mm rounds. It is unknown whether other calibres can easily penetrate the LAV.
The LAV is powered by a liquid-cooled four-cycle diesel engine producing 160 hp. The power pack is mounted centre-forward of the vehicle to distribute weight more evenly between the axles. The propulsion system provides a top speed of 100 km/h and a range of more than 200 miles without refueling. It is fitted with run-flat tires on all wheels. The low turning radius allows the vehicle to negotiate narrow passages.
The Komatsu LAV can be armed with the Sumitomo M249 LMG or Sumitomo M2HB 12.7mm machine gun for anti-personnel duties. It can also mount the Type 01 LMAT or a Kawasaki Type 87 anti-tank missile for anti-armored missions. Smoke grenade dischargers can be mounted on the rear sides of the vehicle.
Operators
See also
Otokar Cobra of Turkey
Véhicule Blindé Léger of France
Sources
References
Kenkyusha's New Japanese-English Dictionary, Kenkyusha Limited, Tokyo 1991,
External links
Official JGSDF Page.
Armoured cars of Japan
Wheeled reconnaissance vehicles
Internal security vehicles
Japan Ground Self-Defense Force
Armoured fighting vehicles of Japan
LAV
Post–Cold War military equipment of Japan
Military vehicles introduced in the 2000s
Military light utility vehicles
Komatsu Limited | Komatsu LAV | [
"Engineering"
] | 733 | [
"Engineering vehicles",
"Komatsu vehicles"
] |
13,281,848 | https://en.wikipedia.org/wiki/Desorption%20electrospray%20ionization | Desorption electrospray ionization (DESI) is an ambient ionization technique that can be coupled to mass spectrometry (MS) for chemical analysis of samples at atmospheric conditions. Coupled ionization sources-MS systems are popular in chemical analysis because the individual capabilities of various sources combined with different MS systems allow for chemical determinations of samples. DESI employs a fast-moving charged solvent stream, at an angle relative to the sample surface, to extract analytes from the surfaces and propel the secondary ions toward the mass analyzer. This tandem technique can be used to analyze forensic samples, pharmaceuticals, plant tissues, fruits, intact biological tissues, enzyme-substrate complexes, metabolites and polymers. Therefore, DESI-MS may be applied in a wide variety of sectors including food and drug administration, pharmaceuticals, environmental monitoring, and biotechnology.
History
DESI has been widely studied since its inception in 2004 by Zoltan Takáts, Justin Wiseman and Bogdan Gologan, in Graham Cooks' group from Purdue University, with the goal of looking into methods that did not require the sample to be inside a vacuum. Both DESI and direct analysis in real time (DART) have been largely responsible for the rapid growth in ambient ionization techniques, with a proliferation of more than eighty new techniques being found today. These methods allow for complex systems to be analyzed without preparation and throughputs as high as 45 samples a minute. DESI is a combination of popular techniques, such as electrospray ionization and surface desorption techniques. Electrospray ionization with mass spectrometry was reported by Malcolm Dole in 1968, but John Bennett Fenn was awarded the Nobel Prize in Chemistry for the development of ESI-MS in the late 1980s. Then in 1999, desorption of open surface and free matrix experiments were reported in the literature utilizing an experiment that was called desorption/ionization on silicon. The combination of these two advancements led to the introduction of DESI and DART as the main ambient ionization techniques that would later become multiple different techniques. One in particular, due to increasing studies into optimization of DESI, is nanospray desorption electrospray ionization (nano-DESI). In this technique the analyte is desorbed into a liquid bridge formed between two capillaries and the analysis surface.
Principle of operation
DESI is a combination of electrospray (ESI) and desorption (DI) ionization methods.
Ionization takes place by directing an electrically charged mist to the sample surface that is a few millimeters away. The electrospray mist is pneumatically directed at the sample where subsequent splashed droplets carry desorbed, ionized analytes. After ionization, the ions travel through air into the atmospheric pressure interface which is connected to the mass spectrometer. DESI is a technique that allows for ambient ionization of a trace sample at atmospheric pressure, with little sample preparation. DESI can be used to investigate secondary metabolites in situ, looking at both spatial and temporal distributions.
Ionization mechanism
In DESI there are two kinds of ionization mechanism, one that applies to low molecular weight molecules and another to high molecular weight molecules. High molecular weight molecules, such as proteins and peptides, show electrospray-like spectra where multiply charged ions are observed. This suggests desorption of the analyte, where multiple charges in the droplet can easily be transferred to the analyte. The charged droplet hits the sample, spreads over a diameter greater than its original diameter, dissolves the protein and rebounds. The droplets travel to the mass spectrometer inlet and are further desolvated. The solvent typically used for the electrospray is a combination of methanol and water.
For the low molecular weight molecules, ionization occurs by charge transfer: an electron or a proton. There are three possibilities for the charge transfer. First, charge transfer between a solvent ion and an analyte on the surface. Second, charge transfer between a gas phase ion and analyte on the surface; in this case the solvent ion is evaporated before reaching the sample surface. This is achieved when the spray to surface distance is large. Third, charge transfer between a gas phase ion and a gas phase analyte molecule. This occurs when a sample has a high vapour pressure.
The ionization mechanism of low molecular weight molecules in DESI is similar to DART's ionization mechanism, in that there is a charge transfer that occurs in the gas phase.
Ionization efficiency
The ionization efficiency of DESI is complex and depends on several parameters such as, surface effects, electrospray parameters, chemical parameters and geometric parameters. Surface effects include chemical composition, temperature and electric potential applied. Electrospray parameters include electrospray voltage, gas and liquid flow rates.
Chemical parameters refer to the sprayed solvent composition, e.g. addition of NaCl. Geometric parameters are α, β, d1 and d2 (see figure on the right).
Furthermore, α and d1 affect the ionization efficiency, while β and d2 affect the collection efficiency. Results of a test performed on a variety of molecules to determine optimal α and d1 values show that there are two sets of molecules: high molecular weight (proteins, peptides, oligosaccharides etc.) and low molecular weight (diazo dyes, steroids, caffeine, nitroaromatics etc.). The optimal conditions for the high molecular weight group are high incident angles (70–90°) and short d1 distances (1–3 mm). The optimal conditions for the low molecular weight group are the opposite, low incident angles (35–50°) and long d1 distances (7–10 mm). These test results indicate that each group of molecules has a different ionization mechanism, described in detail in the Principle of operation section.
The sprayer tip and the surface holder are both attached to a 3D moving stage which allow to select specific values for the four geometric parameters: α, β, d1 and d2.
Applications
Laser ablation electrospray ionization
Laser ablation electrospray ionization (LAESI) mass spectrometry is an ambient ionization technique applicable to plant and animal tissue imaging, live-cell imaging, and most recently to cell-by-cell imaging. This technique uses a mid-IR laser to ablate the sample, which creates a cloud of neutral molecules. This cloud is then hit with the electrospray from above to cause ionization. The desorbed ions are then able to pass into the mass spectrometer for analysis. This method is also well suited to imaging applications. The analytes can be desorbed by pulsed laser irradiation without the need for a matrix. This method is suitable for analytes ranging from small organic molecules up to larger biomolecules.
Matrix assisted laser desorption electrospray ionization
Another method well suited to biomolecules is matrix assisted laser desorption electrospray ionization (MALDESI). This technique uses infrared laser irradiation to excite the sample molecules so that the desorbed ions are ready for MS analysis. The geometry of the source and the distance between the ESI emitter and the matrix have an effect on the ionization efficiency of the sample compound. This technique can also be used with aqueous samples. The water droplet can be placed at the focal point of the laser, or the droplet can be dried to form a solid. Planar samples do not need sample preparation to perform this experiment.
Ion mobility mass spectrometry
Ion mobility spectrometry (IMS) is a technique of ion separation in the gas phase based on differences in ion mobility when an electric field is applied, providing spatial separation prior to MS analysis. With the introduction of DESI as an ion source for ion mobility mass spectrometry, applications for IMS have expanded from only vapor-phase samples with volatile analytes to intact structures and aqueous samples as well. When coupled to a time-of-flight mass spectrometer, analysis of proteins is also possible. These techniques work in tandem to investigate ion shapes and reactivity after ionization. A key characteristic of this setup is its ability to separate the distribution of ions generated in DESI prior to mass spectrometry analysis.
Fourier transform ion cyclotron resonance
As stated before, DESI allows for a direct investigation of natural samples without needing any sample preparation or chromatographic separation. However, because no sample preparation is performed, the resulting spectrum may be very complex. A Fourier transform ion cyclotron resonance (FTICR) mass spectrometer can therefore be coupled to DESI, allowing for higher resolution. The DESI source can be composed of six linear moving stages and one rotating stage, including a 3-D linear stage for samples and another, with the rotating stage, for the spray mount. Coupling an FTICR to DESI can increase mass accuracy to below 3 parts per million. This can be done on both liquid and solid samples.
Liquid chromatography
DESI can be coupled to ultra-fast liquid chromatography using an LC eluent splitting strategy, in which the eluent is split through a tiny orifice on an LC capillary tube. There is negligible dead volume and back pressure, which allows for almost real-time mass spectrometry detection with fast elution and purification. This coupling can be used to ionize a wide range of molecules, from small organics to high-mass proteins. This is different from ESI (electrospray ionization) in that it can be used to directly analyze salt-containing sample solutions without requiring "make-up" solvents or acids to be doped into the sample. This setup allows for a high flow rate without splitting. The high resolution accomplished by reverse-phase HPLC can be combined with this procedure to produce high-throughput screening of natural products as well. The incorporation of an electrochemistry component helps with ionization efficiency via electrochemical conversion. This method has an advantage over ESI in that the small potential applied to the electrochemical cell does not have to be separated from the potential on the spray in DESI. DESI also shows a better tolerance to inorganic salt electrolytes, and traditional electrolysis solvents can be used.
Instrumentation
In DESI, a high-velocity pneumatically assisted electrospray jet is continually directed towards the probe surface. The jet forms a micrometer-thin solvent film on the sample from which analyte can be desorbed. The sample can be dislodged by the incoming spray jet, allowing particles to come off in an ejection cone of analyte-containing secondary droplets. The working principles of DESI are still under study, but some things are known. The erosion diameter of the spray spot formed by DESI is known to be directly tied to the spatial resolution. Both the chemical composition and the texture of the surface also affect the ionization process. The nebulizing gas most commonly used is N₂, set at a typical pressure of 160 psi. The solvent is a combination of methanol and water, sometimes with 0.5% acetic acid, at a flow rate of 10 μL/min. The surface can be mounted in two different ways. One way consists of a surface holder that can carry 1 × 5 cm disposable surface slides lying on a stainless steel plate; a voltage is applied to the steel plate to provide an appropriate surface potential, which can be set to the same value as the sprayer. The second mount is an aluminum block with a built-in heater, which allows temperature control up to 300 °C, with newer stages having built-in CCDs and light sources. The resulting spectra are similar to those of ESI: they feature multiply charged ions, alkali metal adducts and non-covalent complexes that originate from the condensed phase of the sample/solvent interaction. DESI exhibits gentler ionization conditions, which lead to a more pronounced tendency for metal adduct formation and a lower specific charging of the secondary droplets.
See also
Secondary ion mass spectrometry
Matrix-assisted laser desorption ionization
Mass spectrometry imaging
Electrospray ionization
Secondary electrospray ionization
References
Further reading
Method and System for Desorption Electrospray Ionization -
Method and System for Desorption Electrospray Ionization -
Ionization by droplet impact -
External links
Purdue Aston Lab - DESI
Professor Zoltan Takáts personal page - Imperial College London
Ion source
Mass spectrometry | Desorption electrospray ionization | [
"Physics",
"Chemistry"
] | 2,635 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Ion source",
"Mass spectrometry",
"Matter"
] |
13,281,934 | https://en.wikipedia.org/wiki/Gregg%20Thompson%20%28astronomer%29 | Gregg D. Thompson of Brisbane, Australia is an amateur astronomer.
Astronomy
Gregg Thompson was one of the founding members of the Southern Astronomical Society (SAS).
Before 1981 he started making a set of charts of bright galaxies, designed to help deep sky observers in their search for extragalactic supernovae.
In 1985 he received the Amateur Achievement Award of the Astronomical Society of the Pacific, together with Robert Owen Evans,
who had made several supernova discoveries using Thompson's charts.
Evans wrote that the number of galaxies he was able to observe grew substantially after the charts were produced. Gregg Thompson also helped verify some of Evans' discoveries.
Public outreach
In 1990 Gregg Thompson co-authored with James T. Bryan, Jr. the astronomical atlas The Supernova Search Charts and Handbook, containing 248 comparison charts of 345 of the brightest galaxies,
highly valued especially by supernova hunters and recommended by the Supernova Search Committee of the American Association of Variable Star Observers.
In 1993 he published The Australian Guide to Stargazing, a manual for both naked-eye and telescope observing of the sky of the southern hemisphere with explanatory diagrams, photographs and detailed drawings, describing the basics of the night sky observation to novice amateur astronomers.
References
Year of birth missing (living people)
Living people
20th-century Australian astronomers
Amateur astronomers
21st-century Australian astronomers | Gregg Thompson (astronomer) | [
"Astronomy"
] | 269 | [
"Astronomers",
"Amateur astronomers"
] |
13,283,214 | https://en.wikipedia.org/wiki/Jurjen%20Ferdinand%20Koksma | Jurjen Ferdinand Koksma (21 April 1904, Schoterland – 17 December 1964, Amsterdam) was a Dutch mathematician who specialized in analytic number theory.
Koksma received his Ph.D. degree (cum laude) in 1930 at the University of Groningen under supervision of Johannes van der Corput, with a thesis on Systems of Diophantine Inequalities. Around the same time, aged 26, he was invited to become full professor at the Vrije Universiteit Amsterdam. He accepted and in 1930 became the first professor in mathematics at this university. Koksma is also one of the founders of the Dutch Mathematisch Centrum (today Centrum Wiskunde & Informatica).
One of Koksma's main works was the book Diophantische Approximationen, published in 1936 by Springer. He also wrote several papers with Paul Erdős.
In 1950 he became member of the Royal Netherlands Academy of Arts and Sciences.
Koksma had two brothers, Jan and Marten, who were also mathematicians.
See also
Denjoy–Koksma inequality
Koksma's equivalent classification
Koksma–Hlawka inequality
Erdős–Turán–Koksma inequality
References
Literature
Arie van Deursen: The distinctive character of the Free University in Amsterdam, 1880-2005, Eerdmans Publishing (2008).
1904 births
1964 deaths
20th-century Dutch mathematicians
Members of the Royal Netherlands Academy of Arts and Sciences
Number theorists
People from Heerenveen
University of Groningen alumni | Jurjen Ferdinand Koksma | [
"Mathematics"
] | 316 | [
"Number theorists",
"Number theory"
] |
13,283,342 | https://en.wikipedia.org/wiki/Mining%20lamp | A mining lamp is a lamp, developed for the rigid necessities of underground mining operations. Most often it is worn on a hard hat in the form of a headlamp.
History
Types
1813 Dr William Reid Clanny Exhibited The Clanny Lamp
1815 Humphry Davy Exhibited The Davy Lamp
1815 George Stephenson Exhibited his Lamp
The Davy safety lamp was made in London by Humphry Davy. George Stephenson invented a similar lamp, but Davy's design was safer because a fine wire gauze surrounded the flame: the gauze let the light pass through while preventing the "firedamp" (methane) from coming into contact with the flame, reducing the risk of explosion.
1840 Mathieu Mueseler Exhibited The Museler Lamp in Belgium.
1859 William Clark patented the first electrical mining lamp.
1870s J.B.Marsaut (France) double gauze design
1872 Coal Mines Regulation Act required locked lamps under certain conditions
1881 Joseph Swan exhibited his first electric lamp
1882 A 'bonnetted' version of the Clanny lamp was introduced
1883 Ellis Lever of Culcheth Hall, Bowdon offered a £500 prize for creation of a safe portable mining lamp.
1885 Thomas Evans of Aberdare made a Clanny type of safety lamp
1886 Royal Commission on Accidents in Mines tested lamps and made recommendations
1887 Coal Mines Regulation Act – requirements on construction, examination, where used, etc.
1889 John Davis and Co, Derby, were supplying portable electric lamps
1896 Coal Mines Regulation Act - requirements on provision by mine owners, where to be used, etc.
1909 Cap (helmet) lamps introduced in Scotland
1911 Prize offered for best electrical lamp
1911 Coal Mines Act made requirements for pit managers to take examinations, where can be used (including electrical), etc.
1920 Electrical lamp with built in accumulator
1924 Miners Lamp Committee – tests and recommendations
1950 Shale miner's electric safety cap lamp and battery pack made in England and supplied by Concordia Electric Safety Lamp Company Ltd, Cardiff.
Variants
Carbide lamp, a lamp that produces and burns acetylene
Safety lamp, any of several types of lamp which are designed to be safe to use in coal mines
Davy lamp, a safety lamp containing a candle
Geordie lamp, a safety lamp
Wheat lamp
See also
Headlamp (outdoor)
References
Mining equipment
Types of lamp
Safety equipment
Mine safety
Coal mining | Mining lamp | [
"Engineering"
] | 472 | [
"Mining equipment"
] |
13,284,111 | https://en.wikipedia.org/wiki/Wu%27s%20method%20of%20characteristic%20set | Wenjun Wu's method is an algorithm for solving multivariate polynomial equations introduced in the late 1970s by the Chinese mathematician Wen-Tsun Wu. This method is based on the mathematical concept of characteristic set introduced in the late 1940s by J.F. Ritt. It is fully independent of the Gröbner basis method, introduced by Bruno Buchberger (1965), even if Gröbner bases may be used to compute characteristic sets.
Wu's method is powerful for mechanical theorem proving in elementary geometry, and provides a complete decision process for certain classes of problem. It has been used in research in his laboratory (KLMM, Key Laboratory of Mathematics Mechanization in Chinese Academy of Science) and around the world. The main trends of research on Wu's method concern systems of polynomial equations of positive dimension and differential algebra where Ritt's results have been made effective. Wu's method has been applied in various scientific fields, like biology, computer vision, robot kinematics and especially automatic proofs in geometry.
Informal description
Wu's method uses polynomial division to solve problems of the form
∀ x : I(x) ⟹ f(x) = 0,
where f is a polynomial equation and I is a conjunction of polynomial equations. The algorithm is complete for such problems over the complex domain.
The core idea of the algorithm is that one polynomial can be divided by another to give a remainder. Repeated division results either in the remainder vanishing (in which case the statement "I implies f" is true) or in an irreducible remainder being left behind (in which case the statement is false).
More specifically, for an ideal I in the ring k[x1, ..., xn] over a field k, a (Ritt) characteristic set C of I is composed of a set of polynomials in I, which is in triangular shape: polynomials in C have distinct main variables (see the formal definition below). Given a characteristic set C of I, one can decide if a polynomial f is zero modulo I. That is, the membership test is checkable for I, provided a characteristic set of I.
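As a concrete illustration of this membership test, the following sketch uses SymPy's pseudo-remainder to reduce a polynomial by a small triangular set. The helper name, the example system, and the reduction order are choices made for this sketch rather than part of Wu's original formulation, and strictly speaking a vanishing pseudo-remainder certifies membership only up to multiplication by the initials of the set.

from sympy import symbols, prem, expand

x1, x2, x3 = symbols('x1 x2 x3')

# A characteristic set in triangular form: the main variables x2 and x3 are distinct.
C = [x2**2 - x1,        # main variable x2
     x3**2 - x1*x2]     # main variable x3

def pseudo_reduce(f, charset, main_vars):
    """Pseudo-reduce f by the triangular set, highest main variable first."""
    r = f
    for t, v in zip(reversed(charset), reversed(main_vars)):
        r = prem(r, t, v)   # pseudo-remainder of r by t, viewed as polynomials in v
    return expand(r)

# f = x2*(x3**2 - x1*x2) + x1*(x2**2 - x1), so it reduces to zero modulo C.
f = x2*x3**2 - x1**2
print(pseudo_reduce(f, C, [x2, x3]))          # 0 -> membership test succeeds
print(pseudo_reduce(x3 - x1, C, [x2, x3]))    # nonzero -> membership test fails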
Ritt characteristic set
A Ritt characteristic set is a finite set of polynomials in triangular form of an ideal. This triangular set satisfies
certain minimal condition with respect to the Ritt ordering, and it preserves many interesting geometrical properties
of the ideal. However it may not be its system of generators.
Notation
Let R be the multivariate polynomial ring k[x1, ..., xn] over a field k.
The variables are ordered linearly according to their subscript: x1 < ... < xn.
For a non-constant polynomial p in R, the greatest variable effectively present in p, called the main variable or class, plays a particular role:
p can be naturally regarded as a univariate polynomial in its main variable xk with coefficients in k[x1, ..., xk−1].
The degree of p as a univariate polynomial in its main variable is also called its main degree.
Triangular set
A set T of non-constant polynomials is called a triangular set if all polynomials in T have distinct main variables. This generalizes triangular systems of linear equations in a natural way.
Ritt ordering
For two non-constant polynomials p and q, we say p is smaller than q with respect to Ritt ordering and written as p <r q, if one of the following assertions holds:
(1) the main variable of p is smaller than the main variable of q, that is, mvar(p) < mvar(q),
(2) p and q have the same main variable, and the main degree of p is less than the main degree of q, that is, mvar(p) = mvar(q) and mdeg(p) < mdeg(q).
In this way, (k[x1, ..., xn],<r) forms a well partial order. However, the Ritt ordering is not a total order:
there exist polynomials p and q such that neither p <r q nor p >r q. In this case, we say that p and q are not comparable.
The Ritt ordering compares the ranks of p and q. The rank of a non-constant polynomial p, denoted rank(p), is defined to be a power of its main variable: rank(p) = mvar(p)^mdeg(p). Ranks are compared by comparing first the variables and then, in case of equality of the variables, the degrees.
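A small illustrative helper makes the rank comparison concrete: it fixes the variable order x1 < x2 < x3, computes rank(p) as the pair (main variable, main degree), and compares two polynomials by the Ritt ordering. The function names and the SymPy-based representation are assumptions of this sketch only.

from sympy import symbols, degree

ORDER = symbols('x1 x2 x3')            # assumed variable order: x1 < x2 < x3

def main_variable(p):
    """Greatest variable of ORDER actually occurring in the non-constant polynomial p."""
    return [v for v in ORDER if p.has(v)][-1]

def rank(p):
    """The rank mvar(p)^mdeg(p), encoded as (index of main variable, main degree)."""
    v = main_variable(p)
    return (ORDER.index(v), degree(p, v))

def ritt_compare(p, q):
    """-1 if p is smaller, +1 if p is greater, 0 if the ranks are equal (not comparable)."""
    rp, rq = rank(p), rank(q)
    return -1 if rp < rq else (+1 if rp > rq else 0)

x1, x2, x3 = ORDER
print(ritt_compare(x2**2 + x1, x3 - 1))   # -1: smaller main variable
print(ritt_compare(x3**2 + x1, x3**3))    # -1: same main variable, smaller main degree
print(ritt_compare(x3 + x1, x3 + x2))     #  0: equal rank, hence not comparable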
Ritt ordering on triangular sets
A crucial generalization on Ritt ordering is to compare triangular sets.
Let T = { t1, ..., tu} and S = { s1, ..., sv} be two triangular sets
such that polynomials in T and S are sorted increasingly according to their main variables.
We say T is smaller than S w.r.t. Ritt ordering if one of the following assertions holds
there exists k ≤ min(u, v) such that rank(ti) = rank(si) for 1 ≤ i < k and tk <r sk,
u > v and rank(ti) = rank(si) for 1 ≤ i ≤ v.
Also, there exist incomparable triangular sets w.r.t. the Ritt ordering.
Ritt characteristic set
Let I be a non-zero ideal of k[x1, ..., xn]. A subset T of I is a Ritt characteristic set of I if one of the following conditions holds:
T consists of a single nonzero constant of k,
T is a triangular set and T is minimal w.r.t Ritt ordering in the set of all triangular sets contained in I.
A polynomial ideal may possess (infinitely) many characteristic sets, since Ritt ordering is a partial order.
Wu characteristic set
The Ritt–Wu process, first devised by Ritt, subsequently modified by Wu, computes not a Ritt characteristic but an extended one, called Wu characteristic set or ascending chain.
A non-empty subset T of the ideal generated by F is a Wu characteristic set of F if one of the following condition holds
T = {a} with a being a nonzero constant,
T is a triangular set and there exists a subset G of ⟨F⟩ such that ⟨F⟩ = ⟨G⟩ and every polynomial in G is pseudo-reduced to zero with respect to T.
A Wu characteristic set is defined relative to a set F of polynomials, rather than to the ideal generated by F. Also it can be shown that a Ritt characteristic set T of ⟨F⟩ is a Wu characteristic set of F. Wu characteristic sets can be computed by Wu's algorithm CHRST-REM, which only requires pseudo-remainder computations and no factorizations are needed.
Wu's characteristic set method has exponential complexity; improvements in computing efficiency by weak chains, regular chains, and saturated chains have been introduced.
Decomposing algebraic varieties
An application is an algorithm for solving systems of algebraic equations by means of characteristic sets. More precisely, given a finite subset F of polynomials, there is an algorithm to compute characteristic sets T1, ..., Te such that
V(F) = W(T1) ∪ ⋯ ∪ W(Te),
where W(Ti) is the difference of V(Ti) and V(hi); here hi is the product of the initials of the polynomials in Ti.
See also
Regular chain
Mathematics-Mechanization Platform
References
P. Aubry, M. Moreno Maza (1999) Triangular Sets for Solving Polynomial Systems: a Comparative Implementation of Four Methods. J. Symb. Comput. 28(1–2): 125–154
David A. Cox, John B. Little, Donal O'Shea. Ideals, Varieties, and Algorithms. 2007.
Ritt, J. (1966). Differential Algebra. New York, Dover Publications.
Dongming Wang (1998). Elimination Methods. Springer-Verlag, Wien, Springer-Verlag
Dongming Wang (2004). Elimination Practice, Imperial College Press, London
Wu, W. T. (1984). Basic principles of mechanical theorem proving in elementary geometries. J. Syst. Sci. Math. Sci., 4, 207–35
Wu, W. T. (1987). A zero structure theorem for polynomial equations solving. MM Research Preprints, 1, 2–12
External links
wsolve Maple package
The Characteristic Set Method
Computer algebra
Algebraic geometry
Commutative algebra
Polynomials | Wu's method of characteristic set | [
"Mathematics",
"Technology"
] | 1,717 | [
"Polynomials",
"Computer algebra",
"Computational mathematics",
"Fields of abstract algebra",
"Computer science",
"Algebraic geometry",
"Commutative algebra",
"Algebra"
] |
13,284,409 | https://en.wikipedia.org/wiki/Error%20guessing | In software testing, error guessing is a test method in which test cases used to find bugs in programs are established based on experience in prior testing. The scope of test cases usually relies on the software tester involved, who uses experience and intuition to determine what situations commonly cause software failure, or may cause errors to appear. Typical errors include divide by zero, null pointers, or invalid parameters.
Error guessing has no explicit rules for testing; test cases can be designed depending on the situation, either drawing from functional documents or when an unexpected/undocumented error is found while testing operations.
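A hypothetical illustration of how such experience-based guesses turn into test cases follows: the average() function and its expected behaviour are invented for the example, and each test probes one of the typical failure modes mentioned above (divide by zero, a null/None argument, invalid parameters).

import pytest

def average(values):
    """Imaginary function under test: arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def test_empty_list_divides_by_zero():
    with pytest.raises(ZeroDivisionError):   # guessed error: divide by zero
        average([])

def test_none_input():
    with pytest.raises(TypeError):           # guessed error: null/None argument
        average(None)

def test_invalid_element_type():
    with pytest.raises(TypeError):           # guessed error: invalid parameter contents
        average([1, "two", 3])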
References
Software testing
Computer programming
kk:Бағдарламалық тестілеу | Error guessing | [
"Technology",
"Engineering"
] | 137 | [
"Software engineering",
"Computer programming",
"Software testing",
"Computers"
] |
13,284,467 | https://en.wikipedia.org/wiki/Lule%C3%A5%20algorithm | The Luleå algorithm of computer science, designed by , is a technique for storing and searching internet routing tables efficiently. It is named after the Luleå University of Technology, the home institute/university of the technique's authors. The name of the algorithm does not appear in the original paper describing it, but was used in a message from Craig Partridge to the Internet Engineering Task Force describing that paper prior to its publication.
The key task to be performed in internet routing is to match a given IPv4 address (viewed as a sequence of 32 bits) to the longest prefix of the address for which routing information is available. This prefix matching problem may be solved by a trie, but trie structures use a significant amount of space (a node for each bit of each address) and searching them requires traversing a sequence of nodes with length proportional to the number of bits in the address. The Luleå algorithm shortcuts this process by storing only the nodes at three levels of the trie structure, rather than storing the entire trie.
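For comparison, a minimal (uncompressed) binary-trie longest-prefix match might look like the following sketch. The class and function names are illustrative; the point is only that a naive trie needs one node per bit of every stored prefix and up to 32 node visits per lookup, which is the cost the Luleå structure avoids.

class TrieNode:
    def __init__(self):
        self.children = [None, None]   # next node for bit 0 / bit 1
        self.route = None              # routing information if a prefix ends here

def insert(root, prefix_bits, route):
    node = root
    for b in prefix_bits:              # e.g. [1, 0, 1] for the prefix 101/3
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.route = route

def longest_prefix_match(root, address_bits):
    node, best = root, None
    for b in address_bits:             # up to 32 iterations for an IPv4 address
        if node.route is not None:
            best = node.route          # remember the longest matching prefix so far
        node = node.children[b]
        if node is None:
            return best
    return node.route if node.route is not None else best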
Before building the Luleå trie, the routing table entries need to be preprocessed. Any bigger prefix that overlaps a smaller prefix must be repeatedly split into smaller prefixes, and only the split prefixes that do not overlap the smaller prefix are kept. The prefix tree is also required to be complete: if the routing table entries do not cover the entire address space, it must be completed by adding dummy entries, which only carry the information that no route is present for that range. This enables the simplified lookup in the Luleå trie.
The main advantage of the Luleå algorithm for the routing task is that it uses very little memory, averaging 4–5 bytes per entry for large routing tables. This small memory footprint often allows the entire data structure to fit into the routing processor's cache, speeding operations. However, it has the disadvantage that it cannot be modified easily: small changes to the routing table may require most or all of the data structure to be reconstructed.
A modern home-computer (PC) has enough hardware/memory to perform the algorithm.
First level
The first level of the data structure consists of
A bit vector consisting of 216 = 65,536 bits, with one entry for each 16-bit prefix of an IPv4 address. A bit in this table is set to one if there is routing information associated with that prefix or with a longer sequence beginning with that prefix, or if the given prefix is the first one associated with routing information at some higher level of the trie; otherwise it is set to zero.
An array of 16-bit words for each nonzero bit in the bit vector. Each datum either supplies an index that points to the second-level data structure object for the corresponding prefix, or supplies the routing information for that prefix directly.
An array of "base indexes", one for each consecutive subsequence of 64 bits in the bit vector, pointing to the first datum associated with a nonzero bit in that subsequence.
An array of "code words", one for each consecutive subsequence of 16 bits in the bit vector. Each code word is 16 bits, and consists of a 10-bit "value" and a 6-bit "offset". The sum of the offset and the associated base index gives a pointer to the first datum associated with a nonzero bit in the given 16-bit subsequence. The 10-bit value gives an index into a "maptable" from which the precise position of the appropriate datum can be found.
A maptable. Because the prefix tree is required to be complete, there can only exist a limited number of possible 16-bit bitmask values in the bit vector: 678. The maptable rows correspond to these 678 possible 16-bit combinations, and column z of a row holds the number of bits set among the first z bits of that bitmask, minus 1. So column 6 for the bitmask 1010101010101010 would have the value 2. The maptable is constant for any routing table contents.
To look up the datum for a given address x in the first level of the data structure, the Luleå algorithm computes three values:
the base index at the position in the base index array indexed by the first 10 bits of x
the offset at the position in the code word array indexed by the first 12 bits of x
the value in maptable[y][z], where y is the maptable index from the code word array and z is bits 13–16 of x
The sum of these three values provides the index to use for x in the array of items.
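To make the index arithmetic concrete, the following is a minimal Python sketch of this first-level lookup. It assumes the bit vector, base index array, code word array, maptable and item array have already been built as described above; the variable names and data layouts are illustrative, not taken from the original paper.

```python
def first_level_lookup(x, base_index, code_words, maptable, items):
    """Sketch of the Lulea first-level lookup for a 32-bit address x.

    base_index[i]  -- base pointer for the i-th 64-bit block of the bit vector
    code_words[j]  -- (ten_bit_value, six_bit_offset) for the j-th 16-bit block
    maptable[y][z] -- number of set bits (minus 1) up to position z of mask y
    items          -- the array of 16-bit data items described above
    """
    top16 = x >> 16            # 16-bit prefix of the address
    block64 = top16 >> 6       # first 10 bits: which 64-bit block
    block16 = top16 >> 4       # first 12 bits: which 16-bit block
    pos = top16 & 0xF          # bits 13-16: position inside the 16-bit block

    ten_bit_value, offset = code_words[block16]
    index = base_index[block64] + offset + maptable[ten_bit_value][pos]
    return items[index]        # either routing info or a second-level pointer
```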
Second and third levels
The second and third levels of the data structure are structured similarly to each other; in each of these levels the Luleå algorithm must perform prefix matching on 8-bit quantities (bits 17–24 and 25–32 of the address, respectively). The data structure is structured in "chunks", each of which allows performing this prefix matching task on some subsequence of the address space; the data items from the first level data structure point to these chunks.
If there are few enough different pieces of routing information associated with a chunk, the chunk just stores the list of these routes, and searches through them by a single step of binary search followed by a sequential search. Otherwise, an indexing technique analogous to that of the first level is applied.
Notes
References
Internet architecture
Routing software
Networking algorithms
Routing algorithms
Luleå University of Technology | Luleå algorithm | [
"Technology"
] | 1,141 | [
"Internet architecture",
"IT infrastructure"
] |
13,285,352 | https://en.wikipedia.org/wiki/Chilling%20requirement | The chilling requirement of a fruit is the minimum period of cold weather after which a fruit-bearing tree will blossom. It is often expressed in chill hours, which can be calculated in different ways, all of which essentially involve adding up the total amount of time in a winter spent at certain temperatures.
Some bulbs have chilling requirements to bloom, and some seeds have chilling requirements to sprout.
Biologically, the chilling requirement is a way of ensuring that vernalization occurs.
Chilling units or chilling hours
A chilling unit in agriculture is a metric of a plant's exposure to chilling temperatures. Chilling temperatures extend from freezing point to, depending on the model, or even . Stone fruit trees and certain other plants of temperate climate develop next year's buds in the summer. In the autumn the buds become dormant, and the switch to proper, healthy dormancy is triggered by a certain minimum exposure to chilling temperatures. Lack of such exposure results in delayed and substandard foliation, flowering and fruiting. One chilling unit, in the simplest models, is equal to one hour's exposure to the chilling temperature; these units are summed up for a whole season. Advanced models assign different weights to different temperature bands.
Requirements
According to Fishman, chilling in trees acts in two stages. The first is reversible: chilling helps to build up the precursor to dormancy, but the process can be easily reversed with a rise in temperature. After the level of precursor reaches a certain threshold, dormancy becomes irreversible and will not be affected by short-term warm temperature peaks. Apples have the highest chilling requirements of all fruit trees, followed by apricots and, lastly, peaches. Apple cultivars have a diverse range of permissible minimum chilling: most have been bred for temperate weather, but Gala and Fuji can be successfully grown in subtropical Bakersfield, California.
Peach cultivars in Texas range in their requirements from 100 chilling units (Florida Grande cultivar, zoned for low chill regions) to 1,000 units (Surecrop, zoned for high chill regions). Planting a low-chilling cultivar in a high-chill region risks loss of a year's harvest when an early bloom is hit by a spring frost. A high-chilling cultivar planted in a low-chill region will, quite likely, never fruit at all. A four-year study of Ruston Red Alabama peach, which has a threshold of 850 chilling units, demonstrated that a seasonal chilling deficiency of less than 50 units has no effect on harvest. A deficiency of 50 to 100 units may result in the loss of up to 50% of the expected harvest. A deficiency of 250 units or more means the loss of practically the whole harvest; the few fruit will be of very poor quality and have no market value. Rest-breaking agents (e.g. hydrogen cyanamide, trade name BudPro or Dormex), applied in spring, can partially mitigate the effects of insufficient chilling. BudPro can substitute for up to 300 hours of chilling, but excessive spraying or a timing error can easily damage the buds.
Other products such as Dormex use stabilizing compounds.
Chilling of orange trees has two effects. First, it increases production of carotenoids and decreases chlorophyll content of the fruit, improving their appearance and, ultimately, their market value. Second, the "quasi-dormancy" experienced by orange trees triggers concentrated flowering in spring, as opposed to more or less uniform round-the-year flowering and fruiting in warmer climates.
Biennial plants like cabbage, sugar beet, celery and carrots need chilling to develop second-year flowering buds. Excessive chilling in the early stages of a sugar beet seedling, on the contrary, may trigger undesired growth of a flowering stem (bolting) in its first year. This phenomenon has been offset by breeding sugar beet cultivars with a higher minimum chilling threshold. Such cultivars can be seeded earlier than normal without the risk of bolting.
Models
All models require hourly recording of temperatures. The simplest model assigns one chilling unit for every full hour at temperatures below . A slightly more sophisticated model excludes freezing temperatures, which do not contribute to proper dormancy cycle, and counts only hours with temperatures between and .
The Utah model assigns different weights to different temperature bands; a full unit per hour is assigned only to temperatures between and . Maximum effect is achieved at . Temperatures between and (the threshold between chilling and warm weather) have zero weight, and higher temperatures have negative weights: they reduce the beneficial effect of already accumulated chilling hours.
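As a rough illustration of how such models are computed (not taken from any of the cited sources), the Python sketch below accumulates chilling units from an hourly temperature record using per-band weights. The band boundaries and weights shown are placeholders, since the exact thresholds differ between models.

```python
def chilling_units(hourly_temps_c, bands):
    """Sum weighted chilling units over a season of hourly temperatures (deg C).

    bands is a list of (low, high, weight) tuples.  The simplest model is a
    single band with weight 1; Utah-style models use several bands, including
    negative weights for warm hours.  All numbers here are illustrative only.
    """
    total = 0.0
    for temp in hourly_temps_c:
        for low, high, weight in bands:
            if low <= temp < high:
                total += weight
                break
    return total

# Placeholder example: one full unit per hour between 0 and 7 degrees Celsius.
simple_model = [(0.0, 7.0, 1.0)]
```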
Southwick et al. wrote that neither of these models is accurate enough to account for application of rest-breaking agents widely used in modern farming. They advocated the use of a dynamic model tailored to the two-stage explanation of dormancy.
References
External links
Harvest Prediction Model for the counties and towns of California . University of California Agricultural and Natural Resources.
gardenweb discussion of chilling requirements
permaculture discussion of chilling requirements
chill accumulation calculator for wunderground.com weather stations
Horticulture
Meteorological indices
Plant physiology | Chilling requirement | [
"Biology"
] | 1,045 | [
"Plant physiology",
"Plants"
] |
13,285,816 | https://en.wikipedia.org/wiki/Potassium%20canrenoate | Potassium canrenoate (INN, JAN) or canrenoate potassium (USAN) (brand names Venactone, Soldactone), also known as aldadiene kalium, the potassium salt of canrenoic acid, is an aldosterone antagonist of the spirolactone group. Like spironolactone, it is a prodrug, and is metabolized to active canrenone in the body.
Potassium canrenoate is notable in that it is the only clinically used antimineralocorticoid which is available for parenteral administration (specifically intravenous) as opposed to oral administration.
In the UK, it is unlicensed and only used for short term diuresis in oedema or heart failure in neonates or children under specialist initiation and monitoring.
See also
Canrenoic acid
Canrenone
References
11β-Hydroxylase inhibitors
Antimineralocorticoids
CYP17A1 inhibitors
Pregnanes
Potassium compounds
Prodrugs
Progestogens
Spirolactones
Steroidal antiandrogens | Potassium canrenoate | [
"Chemistry"
] | 229 | [
"Chemicals in medicine",
"Prodrugs"
] |
13,285,821 | https://en.wikipedia.org/wiki/Canrenone | Canrenone, sold under the brand names Contaren, Luvion, Phanurane, and Spiroletan, is a steroidal antimineralocorticoid of the spirolactone group related to spironolactone which is used as a diuretic in Europe, including in Italy and Belgium. It is also an important active metabolite of spironolactone, and partially accounts for its therapeutic effects.
Medical uses
Canrenone has been found to be effective in the treatment of hirsutism in women.
Heart failure
Two studies of canrenone in people with heart failure have shown a mortality benefit compared to placebo. In the evaluation that studied people with chronic heart failure (CHF), those treated with canrenone had fewer deaths than the placebo group, indicating a mortality and morbidity benefit of the medication.
One study, lasting 10 years, compared 166 patients treated with canrenone with 336 given conventional therapy. Differences in systolic and diastolic blood pressure were observed between the two patient groups: patients treated with canrenone showed lower blood pressure than those on conventional therapy. Uric acid was lower in the group treated with canrenone; however, no differences were seen in potassium, sodium, and brain natriuretic peptide (BNP) levels. Left ventricular mass was also lower in the group treated with canrenone, and a greater progression of NYHA class was observed in the control group than in patients treated with canrenone.
Another study concluded that treatment with canrenone in patients with chronic heart failure improves diastolic function and further decreased BNP levels.
Pharmacology
Pharmacodynamics
Canrenone is reportedly more potent as an antimineralocorticoid relative to spironolactone, but is considerably less potent and effective as an antiandrogen. Similarly to spironolactone, canrenone inhibits steroidogenic enzymes such as 11β-hydroxylase, cholesterol side-chain cleavage enzyme, 17α-hydroxylase, 17,20-lyase, and 21-hydroxylase, but once again, is comparatively less potent in doing so.
Pharmacokinetics
The elimination half-life of canrenone is about 16.5 hours.
As a metabolite
Canrenone is an active metabolite of spironolactone, canrenoic acid, and potassium canrenoate, and is considered to be partially responsible for their effects. It has been found to have approximately 10 to 25% of the potassium-sparing diuretic effect of spironolactone, whereas another metabolite, 7α-thiomethylspironolactone (7α-TMS), accounts for around 80% of the potassium-sparing effect of the drug.
History
Canrenone was described and characterized in 1959. It was introduced for medical use, in the form of potassium canrenoate (the potassium salt of canrenoic acid), by 1968.
Society and culture
Generic names
Canrenone is the and of the drug.
Brand names
Canrenone has been marketed under the brand names Contaren, Luvion, Phanurane, and Spiroletan, among others.
Availability
Canrenone appears to remain available only in Italy, although potassium canrenoate remains marketed in various other countries as well.
See also
Canrenoic acid
Potassium canrenoate
References
11β-Hydroxylase inhibitors
21-Hydroxylase inhibitors
Antimineralocorticoids
Cholesterol side-chain cleavage enzyme inhibitors
CYP17A1 inhibitors
Diuretics
Human drug metabolites
Lactones
Pregnanes
Progestogens
Spiro compounds
Spirolactones
Spironolactone
Steroidal antiandrogens
World Anti-Doping Agency prohibited substances
Conjugated dienes
Enones | Canrenone | [
"Chemistry"
] | 821 | [
"Organic compounds",
"Chemicals in medicine",
"Human drug metabolites",
"Spiro compounds"
] |
13,286,310 | https://en.wikipedia.org/wiki/Printing%20and%20writing%20paper | Printing and writing papers are paper grades used for newspapers, magazines, catalogs, books, notebooks, commercial printing, business forms, stationery, copying and digital printing. About 1/3 of the total pulp and paper market (in 2000) was printing and writing papers. The pulp or fibers used in printing and writing papers are extracted from wood using a chemical or mechanical process.
Paper standards
The most common paper size in office use is US letter in the US, and A4 where the ISO paper series are in use. A4 ("metric") paper is easier to obtain in the US than US letter paper is elsewhere.
The ISO 216:2007 is the current international standard for paper sizes, including writing papers and some types of printing papers. This standard describes the paper sizes under what the ISO calls the A, B, and C series formats.
Not all countries follow ISO 216. North America, for instance, uses certain terms to describe paper sizes, such as Letter, Legal, Junior Legal, and Ledger or Tabloid.
Most types of printing papers also do not follow ISO standards but have features that conform with leading industry standards. These include, among others, ink adhesion, light sensitivity, waterproofing, compatibility with thermal or PSA overlaminate, and glossy or matte finish.
Additionally, the American National Standards Institute or ANSI also defined a series of paper sizes, with size A being the smallest and E the largest. These paper sizes have aspect ratios 1:1.2941 and 1:1.5455.
Vietnam
Types
Fine paper
Machine finished coated paper
Newsprint
History
The history of paper is often attributed to the Han dynasty (25-220 AD) when Cai Lun, a Chinese court official and inventor, made paper sheets using the “bark of trees, remnants of hemp, rags of cloth, and fishing nets.” Cai Lun's method of papermaking received praise during his time for offering a more convenient alternative to writing on silk or bamboo tablets, which were the traditional materials in ancient Chinese writing.
On the other hand, archeological evidence supports that the ancient Chinese military had used paper over a hundred years before Cai Lun's contribution and that maps from early 2nd century BCE were also made with paper. With this, it appears that what Cai Lun accomplished is not an invention but an improvement in the papermaking process. Today, even with the presence of modern tools and machines for papermaking, most processes still involve the traditional steps that Cai Lun employed, namely the process of soaking felted fiber sheets in water, draining the water, and then drying the fiber into thin sheets.
In 1690, the first paper mill in America was established by William Rittenhouse. The mill became the largest manufacturer of paper in America for over a hundred years until other paper mills sprang up, including the paper mill by William Bradford which supplied paper to the New York Gazette.
References
Paper
Materials | Printing and writing paper | [
"Physics"
] | 594 | [
"Materials",
"Matter"
] |
13,288,599 | https://en.wikipedia.org/wiki/Trisonic%20Wind%20Tunnel | A Trisonic Wind Tunnel (TWT) is a wind tunnel so named because it is capable of testing in three speed regimes – subsonic, transonic, and supersonic. The earliest known trisonic wind tunnel dated to 1950 and was located in El Segundo, California, before it closed in 2007. Other trisonic wind tunnels currently in operation are those located at NASA's Marshall Space Flight Center, National Research Council Canada's 1.5 m Trisonic Wind Tunnel Research Facility and the French-German Research Institute of Saint-Louis, ISRO's Vikram Sarabhai Space Centre (VSSC) in Thiruvananthapuram, and the 1.2 m Trisonic Wind Tunnel Facility at National Aerospace Laboratories.
El Segundo Trisonic Wind Tunnel
The El Segundo Trisonic Wind Tunnel or North American Trisonic Wind Tunnel (NATWT) was a wind tunnel that was located in El Segundo, California. It was built by North American Aviation in the 1950s. The tunnel had a maximum testing speed of Mach 3.5.
The NATWT was a blow-down type tunnel. In contrast to a continuous wind tunnel, a blow-down wind tunnel only provides air for a short period. A continuous wind tunnel is driven by large fans and typically is only capable of subsonic speeds. Because a blow-down tunnel can build up pressure over a long period of time, it can release air at higher speeds.
The NATWT used two Westinghouse motors, totaling 10,000 hp and consuming 8 megawatts of electricity, that drove two compressors. NATWT had its own substation to supply its high electrical demand. During the hot summer season, NATWT ran on a night schedule to balance its load with public air conditioning.
The compressors pressurized eight large spheres totaling . These spheres were connected to a single manifold that connected to a valve mechanism. When the valve was opened, the compressed air passed through the settling chamber, nozzle, and the test section, where instrumented aerodynamic models were mounted. A diffusing area that expanded in size slowed the air before it was exhausted vertically into the atmosphere. The diffuser area included a colander-like sieve made of steel to catch debris in the event of a catastrophic model failure.
The speed of the air was determined by the pressure of the spheres and the cross sectional area of the wind tunnel nozzle and diffuser. A smaller cross section in the nozzle caused the air to move faster. The NATWT could change the shape of the nozzle by operating a series of hydraulic pistons that would bend one-inch thick steel plates into the desired contour.
A distinguishing feature of the NATWT was the size of its test section. Unlike most blow-down wind tunnels, the NATWT had a so-called "walk in" test section that could accommodate very large aerodynamic models. Large models have several advantages:
ability to model relatively small features, such as vortex generators
ability to instrument the model with more pressure probes and sensors
more surface area enabling more pressure sensors
more interior space for instrumentation
Because of the "walk in" nature of NATWT, the tunnel was designed to account for the possibility that someone could accidentally be locked in the tunnel. Two large emergency safety switches were provided. One was located at the test section, the other at the diffuser area. When either of these safety switches was activated, the valve could not be opened.
Another feature of NATWT was the ability to visualize airflow over a model surface. By using optics built into the test section, an engineer could view air disturbance patterns as they were occurring during a test.
History
When Rockwell International purchased North American Aviation, it also gained ownership of the NATWT. The NATWT was then gifted to the University of California, Los Angeles (UCLA) in 1998, with the intention of NATWT becoming a university research facility. It became known as the Micro Craft Trisonic Wind Tunnel. In 2007, UCLA decided to close the trisonic wind tunnel, citing environmental issues.
The last test to be conducted at TWT was completed on August 28, 2007. It was designated as test TWT 807. TWT was demolished in 2009.
References
Sources
El Segundo Wind Tunnel, 9/2007
Wind tunnels
El Segundo, California
Fluid dynamics
Buildings and structures in Los Angeles County, California | Trisonic Wind Tunnel | [
"Chemistry",
"Engineering"
] | 886 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
13,288,679 | https://en.wikipedia.org/wiki/Dividing%20engine | A dividing engine is a device employed to mark graduations on measuring instruments.
History
There has always been a need for accurate measuring instruments. Whether it is a linear device such as a ruler or vernier or a circular device such as a protractor, astrolabe, sextant, theodolite, or setting circles for astronomical telescopes, the desire for ever greater precision has always existed. For every improvement in the measuring instruments, such as better alidades or the introduction of telescopic sights, the need for more exact graduations immediately followed.
In early instruments, graduations were typically etched or scribed lines in wood, ivory or brass. Instrument makers devised various devices to perform such tasks. Early Islamic instrument makers must have had techniques for the fine division of their instruments, as this accuracy is reflected in the accuracy of the readings they made. This skill and knowledge seems to have been lost, given that small quadrants and astrolabes in the 15th and 16th centuries did not show fine graduations and were relatively roughly made.
In the 16th century, European instrument makers were hampered by the materials available. Brass was in hammered sheets with rough surfaces and iron graving tools were poor quality. There were not enough makers to have created a long tradition of practice and few were trained by masters.
Transversals set a standard in the early 14th century. Tycho Brahe used transversals on his instruments and made the method better known. Transversals based on straight lines do not provide correct subdivisions on an arc, so other methods, such as those based on the use of circular arcs as developed by Philippe de La Hire, were also used.
Another system was created in the 16th century by Pedro Nunes and was called nonius after him. It consisted of tracing a certain number of concentric circles on an instrument and dividing each successive one with one fewer divisions than the adjacent outer circle. Thus the outermost quadrant would have 90° in 90 equal divisions, the next inner would have 89 divisions, the next 88 and so on. When an angle was measured, the circle and the division on which the alidade fell was noted. A table was then consulted to provide the exact measure. However, this system was difficult to construct and used by few. Tycho Brahe was one exception.
Some improvements to Nunes' system were developed by Christopher Clavius and Jacob Curtius. Curtius' work led directly to that of Pierre Vernier, published in 1631. Vernier refined this process and gave us the vernier scale. However, though these various techniques improved the reading of graduations, they did not contribute directly to the accuracy of their construction. Further improvements came slowly, and a new development was required: the dividing engine.
Prior work on the development of gear cutting machines had prepared the way. Such devices were required to cut a circular plate with uniform gear teeth. Clockmakers were familiar with these methods and they were important in developing dividing engines. George Graham devised a process of using geometric methods to divide the limb of an instrument. He developed a sophisticated beam compass to aid marking of the graduations. John Bird and Jeremiah Sisson followed on with these techniques. These beam compass techniques were used into the 19th century, as the dividing engines that followed did not scale up to the largest instruments being constructed.
The first true circular dividing engine was probably constructed by Henry Hindley, a clockmaker, around 1739. This was reported to the Royal Society by John Smeaton in 1785. It was based directly on a gear cutting machine for clockworks. It used a toothed index plate and a worm gear to advance the mechanism. Duc de Chaulnes created two dividing engines between 1765 and 1768 for dividing circular arcs and linear scales. He desired to improve on the graduation of instruments by removing the skill of the maker from the technique where possible. While beam compass use was critically dependent on the skill of the user, his machine produced more regular divisions by virtue of its design. His machines were also inspired by the prior work of the clockmakers.
Jesse Ramsden followed duc de Chaulnes by five years in the production of his dividing engine. As with the prior inventions, Ramsden's used a tangent screw mechanism to advance the machine from one position to another. However, he had developed a screw-cutting lathe that was particularly advanced and produced a superior product. This engine was developed with funding from the Board of Longitude on condition that it be described in detail (along with the related screw-cutting lathe) and not be protected by patent. This allowed others to freely copy the device and improve on it. In fact, the Board required that he teach others to construct their own copies and make his dividing engine available to graduate instruments made by others.
Refinements
Edward Troughton was the first to build a copy of the Ramsden design. He enhanced the design and produced his own version. This permitted an improvement in the accuracy of the dividing engine.
Samuel Rhee developed his own endless screw cutting machine and was able to sell machines to others. His screws were considered the finest available at the time.
In France, Étienne Lenoir created a dividing engine of greater accuracy than the English version. Mégnié, Richer, Fortin and Jecker had also built dividing engines of considerable quality.
By the beginning of the 19th century, it was possible to make instruments such as the sextant that remained fully serviceable and of sufficient accuracy to be in use for a half century or more.
The dividing engine was unique among developments in the manufacture of scientific instruments, as it was immediately accepted by all makers. There was no uncertainty in the value of this development.
Bryan Donkin designed and built a screw cutting and dividing engine lathe in 1826, which set new standards of precision for the creation of accurate leadscrews, a necessary precursor to the development of precision machining in the Industrial Revolution.
See also
Henry Joseph Grayson - an Australian inventor who developed an engine (~1900) for making diffraction gratings that ruled 120,000 lines to the inch (approximately 4,700 per mm).
References
External links
Historical scientific instruments
Dimensional instruments
Astronomical instruments | Dividing engine | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,249 | [
"Dimensional instruments",
"Physical quantities",
"Quantity",
"Size",
"Astronomical instruments"
] |
13,289,313 | https://en.wikipedia.org/wiki/Souders%E2%80%93Brown%20equation | In chemical engineering, the Souders–Brown equation (named after Mott Souders and George Granger Brown) has been a tool for obtaining the maximum allowable vapor velocity in vapor–liquid separation vessels (variously called flash drums, knockout drums, knockout pots, compressor suction drums and compressor inlet drums). It has also been used for the same purpose in designing trayed fractionating columns, trayed absorption columns and other vapor–liquid-contacting columns.
A vapor–liquid separator drum is a vertical vessel into which a liquid and vapor mixture (or a flashing liquid) is fed and wherein the liquid is separated by gravity, falls to the bottom of the vessel, and is withdrawn. The vapor travels upward at a design velocity which minimizes the entrainment of any liquid droplets in the vapor as it exits the top of the vessel.
Use
The diameter of a vapor–liquid separator drum is dictated by the expected volumetric flow rate of vapor and liquid from the drum. The following sizing methodology is based on the assumption that those flow rates are known.
Use a vertical pressure vessel with a length–diameter ratio of about 3 to 4, and size the vessel to provide about 5 minutes of liquid inventory between the normal liquid level and the bottom of the vessel (with the normal liquid level being somewhat below the feed inlet).
Calculate the maximum allowable vapor velocity in the vessel by using the Souders–Brown equation:

$V = k\,\sqrt{\dfrac{\rho_L - \rho_V}{\rho_V}}$

where
$V$ is the maximum allowable vapor velocity in m/s
$\rho_L$ is the liquid density in kg/m³
$\rho_V$ is the vapor density in kg/m³
$k$ = 0.107 m/s (when the drum includes a de-entraining mesh pad)
Then the cross-sectional area of the drum can be found from:

$A = \dfrac{Q}{V}$

where
$Q$ is the vapor volumetric flow rate in m³/s
$A$ is the cross-sectional area of the drum in m²
And the drum diameter is:

$D = \sqrt{\dfrac{4A}{\pi}}$
The drum should have a vapor outlet at the top, liquid outlet at the bottom, and feed inlet at about the half-full level. At the vapor outlet, provide a de-entraining mesh pad within the drum such that the vapor must pass through that mesh before it can leave the drum. Depending upon how much liquid flow is expected, the liquid outlet line should probably have a liquid level control valve.
As for the mechanical design of the drum (materials of construction, wall thickness, corrosion allowance, etc.) use the same criteria as for any pressure vessel.
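For illustration, the following Python sketch carries out the sizing calculation described above; the input values in the example call are arbitrary, and k is taken as 0.107 m/s, the value quoted above for a drum with a de-entraining mesh pad.

```python
import math

def size_vertical_drum(q_vapor, rho_liquid, rho_vapor, k=0.107):
    """Estimate drum size from the Souders-Brown maximum vapor velocity.

    q_vapor    -- vapor volumetric flow rate, m^3/s
    rho_liquid -- liquid density, kg/m^3
    rho_vapor  -- vapor density, kg/m^3
    k          -- Souders-Brown capacity factor, m/s
    """
    v_max = k * math.sqrt((rho_liquid - rho_vapor) / rho_vapor)  # m/s
    area = q_vapor / v_max                                       # m^2
    diameter = math.sqrt(4.0 * area / math.pi)                   # m
    return v_max, area, diameter

# Arbitrary example: 0.5 m^3/s of vapor, a water-like liquid and a light gas.
print(size_vertical_drum(q_vapor=0.5, rho_liquid=800.0, rho_vapor=5.0))
```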
Recommended values of k
The GPSA Engineering Data Book recommends the following values for vertical drums with horizontal mesh pads (at the denoted operating pressures):
At a gauge pressure of 0 bar: 0.107 m/s
At a gauge pressure of 7 bar: 0.107 m/s
At a gauge pressure of 21 bar: 0.101 m/s
At a gauge pressure of 42 bar: 0.092 m/s
At a gauge pressure of 63 bar: 0.083 m/s
At a gauge pressure of 105 bar: 0.065 m/s
GPSA notes:
k = 0.107 m/s at a gauge pressure of 7 bar. Subtract 0.003 m/s for every 7 bar above a gauge pressure of 7 bar.
For glycol or amine solutions, multiply above values by 0.6 – 0.8
Typically use one-half of the above values for approximate sizing of vertical separators without mesh pads
For compressor suction scrubbers and expander inlet separators, multiply by 0.7 – 0.8
See also
Demister
References
Equations
Gas-liquid separation | Souders–Brown equation | [
"Chemistry",
"Mathematics"
] | 729 | [
"Equations",
"Mathematical objects",
"Gas-liquid separation",
"Separation processes by phases"
] |
13,289,588 | https://en.wikipedia.org/wiki/Foundation%20integrity%20testing | Foundation integrity testing is the non-destructive testing of piled foundations. It was first used in the late 1960s, and has been developed over time by many companies. Three organizations supply a majority of the test equipment in use: CEBTP (Centre Expérimental de Recherches et d'Etudes du Bâtiment et des Travaux Publics) in Europe; Integrity Testing in Asia and Australia; and GRL in the USA.
References
Bridges | Foundation integrity testing | [
"Engineering"
] | 92 | [
"Structural engineering",
"Bridges"
] |
13,289,657 | https://en.wikipedia.org/wiki/XHTML%2BMathML%2BSVG | XHTML+MathML+SVG is a W3C standard that describes an integration of MathML and Scalable Vector Graphics semantics with XHTML and Cascading Style Sheets. It is categorized as "obsolete" on the W3C's HTML Current Status page.
References
External links
W3C Working Draft
World Wide Web Consortium standards | XHTML+MathML+SVG | [
"Technology"
] | 73 | [
"Computing stubs",
"World Wide Web stubs"
] |
13,290,399 | https://en.wikipedia.org/wiki/Expanded%20clay%20aggregate | Lightweight expanded clay aggregate (LECA) or expanded clay (exclay) is a lightweight aggregate made by heating clay to around in a rotary kiln. The heating process causes gases trapped in the clay to expand, forming thousands of small bubbles and giving the material a porous structure. LECA has an approximately round or oblong shape due to circular movement in the kiln and is available in different sizes and densities. LECA is used to make lightweight concrete products and other uses.
History
LECA was developed around 1917 in Kansas City, Missouri, with the production in a rotary kiln of a patented expanded aggregate known as Haydite, which was used in the construction of SS Selma, an ocean-going ship launched in 1919. This was followed in the USA by the development of a series of aggregates known as Gravelite, Perlite, Rocklite, etc. In Europe, LECA production commenced in Denmark, Germany, the Netherlands, and the UK.
Characteristics
LECA is usually produced in different sizes and densities from up to , commonly 0–4 mm, 4–10 mm, 10–25 mm and densities of 250, 280, 330, and 510 kg/m3. LECA boulder is the biggest size of LECA with 100–500 mm size and 500 kg/m3 density.
Characteristics of LECA include light weight; thermal insulation, with a thermal conductivity coefficient as low as 0.097 W/(m·K); sound insulation; moisture impermeability; incompressibility under permanent pressure and gravity loads; resistance to decomposition in severe conditions; fire resistance; a pH of nearly 7; resistance to freezing and thawing; ease of movement and transportation; use as lightweight backfill and finishing; reduction of construction dead load and earthquake lateral load; suitability as a sweet (non-acidic) growing medium for plants; and use as a material for drainage and filtration.
Uses
Common uses are in concrete blocks, concrete slabs, geotechnical fillings, lightweight concrete, water treatment, hydroponics, aquaponics and hydroculture.
LECA is a versatile material and is utilized in an increasing number of applications. In the construction industry, it is used extensively in the production of lightweight concrete, blocks and precast or incast structural elements (panels, partitions, bricks and light tiles). LECA is used in structural backfill against foundations, retaining walls, bridge abutments, etc.; it can reduce earth pressure by 75% compared with conventional materials, and it also increases ground stability while reducing settlement and land deformation. LECA can drain surface water and groundwater to control groundwater pressure. LECA grout can be used for flooring (finishing) and roofing with thermal and sound insulation.
LECA is also used in water treatment facilities for the filtration and purification of municipal wastewater and drinking water as well as in other filtering processes, including those for dealing with industrial wastewater and fish farms.
LECA has many uses in horticulture, agriculture, and landscaping. It helps to alter soil mechanics and provides many benefits across horticulture. It is commonly used as a growing medium in hydroponic systems: blended with other growing media such as soil and peat, it can improve compaction resistance and drainage, retain water during periods of drought, insulate roots during frost, and provide roots with increased oxygen levels, promoting vigorous growth. LECA can be mixed with heavy soil to improve its aeration and drainage.
In the horticultural practice of hydroponics, LECA is a favored medium for growing plants within; the round shape provides excellent aeration at the root level, while the LECA clay pieces themselves become saturated with water and plant food, thus giving the roots a consistent supply of both. So-called semi-hydroponics or passive hydroponics (also called "semi-hydro") is a popular, simplified method of hydroponics, most commonly utilized for houseplants and tropical species. A plant is potted in solely LECA (preferably in a container with multiple air holes on the sides and bottom, like an orchid pot) and placed into a second, sealed container, in which a "nutrient reservoir" of water and plant food is maintained. Only the very bottom of the pot needs to touch this reservoir; the LECA, being porous by nature, gradually wicks moisture and nutrients up and becomes saturated, allowing the plant to feed and drink at a consistent rate.
One of the main differences between semi-hydroponics and more advanced hydroponics is how the water and nutrients are supplied: whether they are consistently delivered, or whether they sit beneath each plant and must be replaced periodically before evaporating or becoming stagnant. Other differences include the facilities, the equipment, the types of plants being grown and why, and the consistency of water and nutrient delivery. Hydroponics is often favored by farmers and growers of edible crops on a larger scale, and LECA can be used successfully in these settings. Semi-hydroponics is much more popular with individual plant collectors, bearing in mind the need to flush out and replenish the plants' nutrient reservoirs periodically (approximately every 7–10 days); more advanced systems provide constantly flowing, filtered, nutrient-enhanced water over the plants' roots, which typically drains into another reservoir for recycling and reuse.
See also
Perlite
Aggregate (composite)
Vermiculite
References
Ceramic materials
Soil-based building materials
Natural materials
Sediments
Phyllosilicates
Earthworks (engineering) | Expanded clay aggregate | [
"Physics",
"Engineering"
] | 1,128 | [
"Natural materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
15,962,283 | https://en.wikipedia.org/wiki/Grave%20orb | A grave orb is a petrosphere that was put on a person's tomb. Grave orbs were made throughout Scandinavia from the Pre-Roman Iron Age until the Vendel era.
The grave orb could have been selected for its round shape or shaped by hand. They were then put in the centre of a burial site. Tumuli, stone circles and stone ships often have a reclined or raised central stone, and grave orbs derive from this practice. They were of ritual or symbolic significance.
Some grave orbs are engraved with ornaments, such as the orb at Inglinge hög or Barrow of Inglinge near Ingelstad in Småland, Sweden. Hög is from the Old Norse word haugr meaning mound or barrow.
See also
Stone balls
Stone spheres of Costa Rica
Carved stone balls of Scotland
Sources
The article 'Gravklot' in Nationalencyklopedin (1992).
Rock art in Europe
Archaeological artefact types
Prehistoric art
Prehistoric Scandinavia
Germanic archaeological artifacts
Migration Period
Stones | Grave orb | [
"Physics"
] | 209 | [
"Stones",
"Physical objects",
"Matter"
] |
15,962,940 | https://en.wikipedia.org/wiki/Rarian | Rarian is a document cataloging system (formerly known as Spoon). It manages documentation metadata, as specified by the Open Source Metadata Framework (OMF). Rarian is used by the GNOME desktop help browser, Yelp. It has replaced ScrollKeeper, as originally designed. It provides an API.
References
External links
Rarian
Open Source Metadata Framework
Freedesktop.org libraries
GNOME
KDE
Metadata | Rarian | [
"Technology"
] | 84 | [
"Metadata",
"Data"
] |
15,962,998 | https://en.wikipedia.org/wiki/Innovative%20Medicines%20Initiative | The Innovative Medicines Initiative (IMI) is a European initiative to improve the competitive situation of the European Union in the field of pharmaceutical research. The IMI is a joint initiative (public-private partnership) of the DG Research of the European Commission, representing the European Communities, and the European Federation of Pharmaceutical Industries and Associations (EFPIA). IMI is laid out as a Joint Technology Initiative within the Seventh Framework Programme. Michel Goldman was the first executive director, from September 2009 until December 2014.
The Innovative Medicines Initiative is aimed towards removing research bottlenecks in the current drug development process. The IMI Joint Technology Initiative (IMI JTI), to be implemented by the IMI Joint Undertaking is meant to address these research bottlenecks. Its €2bn budget makes it the largest biomedical public-private partnership in the world.
The funding scheme has been criticised for requiring universities to invest more money than under EU FP7 programmes. Besides the non-competitive financial terms of participation in IMI projects for academia, this criticism also notes that intellectual property flows freely to industry.
The Sixth Framework Programme's research projects InnoMed AddNeuroMed and InnoMed PredTox acted as pilot projects establishing the feasibility of this particular public-private partnership. Since then, the IMI has had four funding rounds: the first call had the topic Safety, while the second call was about Efficacy. Projects for these two calls are ongoing.
The IMI 2 programme started in 2014 and will run until 2024, while IMI 1 projects are still running. The overall budget is €3.276 billion, half of which comes from the European Horizon 2020 programme. The goals of this second programme are to improve the success rate of clinical trials and to deliver clinical proof of concept, biomarkers and new medicines.
IMI-Train
In September 2014 IMI-TRAIN, an IMI/ENSO-funded education and training collaboration to support biomedical scientists and professionals, has been launched. IMI-TRAIN will serve as a collaboration platform for the currently IMI-funded education and training projects:
EMTRAIN: European Medicines Research Training Network
Eu2P: European programme in Pharmacovigilance and Pharmacoepidemiology
Pharmatrain: Pharmaceutical Medicine Training Programme
SafeSciMET: Safety Sciences Modular Education and Training
See also
International Labour Organization
World Intellectual Property Organization
References
External links
European Union health policy
European Union technology policy
Horizon 2020 projects
Joint undertakings of the European Union and European Atomic Energy Community
Pharmaceutical industry
Science and technology in Europe | Innovative Medicines Initiative | [
"Chemistry",
"Biology"
] | 512 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
15,963,716 | https://en.wikipedia.org/wiki/Pipecolic%20acid | Pipecolic acid (piperidine-2-carboxylic acid) is an organic compound with the formula HNC5H9CO2H. It is a carboxylic acid derivative of piperidine and, as such, an amino acid, although not one encoded genetically. Like many other α-amino acids, pipecolic acid is chiral, although the S-stereoisomer is more common. It is a colorless solid.
Its biosynthesis starts from lysine. CRYM, a taxon-specific protein that also binds thyroid hormones, is involved in the pipecolic acid pathway.
Medicine
It accumulates in pipecolic acidemia. Elevation of pipecolic acid can be associated with some forms of epilepsy, such as pyridoxine-dependent epilepsy.
Occurrence and reactions
Like most amino acids, pipecolic acid is a chelating agent. One complex is Cu(HNC5H9CO2)2(H2O)2.
Pipecolic acid was identified in the Murchison meteorite.
It also occurs in the leaves of the genus Myroxylon, a tree from South America.
See also
Bupivacaine
Efrapeptin
References
Alpha-Amino acids
2-Piperidinyl compounds
Secondary amino acids | Pipecolic acid | [
"Chemistry",
"Biology"
] | 272 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
15,964,561 | https://en.wikipedia.org/wiki/Forensic%20materials%20engineering | Forensic materials engineering, a branch of forensic engineering, focuses on the material evidence from crime or accident scenes, seeking defects in those materials which might explain why an accident occurred, or the source of a specific material to identify a criminal. Many analytical methods used for material identification may be used in investigations, the exact set being determined by the nature of the material in question, be it metal, glass, ceramic, polymer or composite. An important aspect is the analysis of trace evidence such as skid marks on exposed surfaces, where contact between dissimilar materials leaves material traces of one left on the other. Provided the traces can be analysed successfully, then an accident or crime can often be reconstructed. Another aim will be to determine the cause of a broken component using the technique of fractography.
Metals and alloys
Metal surfaces can be analyzed in a number of ways, including by spectroscopy and EDX used during scanning electron microscopy. The nature and composition of the metal can normally be established by sectioning and polishing the bulk, and examining the flat section using optical microscopy after etching solutions have been used to provide contrast in the section between alloy constituents. Such solutions (often an acid) attack the surface preferentially, so isolating features or inclusions of one composition, enabling them to be seen much more clearly than in the polished but untreated surface. Metallography is a routine technique for examining the microstructure of metals, but can also be applied to ceramics, glasses and polymers. SEM can often be critical in determining failures modes by examining fracture surfaces. The origin of a crack can be found and the way it grew assessed, to distinguish, for example, overload failure from fatigue. Often however, fatigue fractures are easy to distinguish from overload failures by the lack of ductility, and the existence of a fast crack growth region and the slow crack growth area on the fracture surface. Crankshaft fatigue for example is a common failure mode for engine parts. The example shows just two such zones, the slow crack at the base, the fast at the top.
Ceramics and glasses
Hard products like ceramic pottery and glass windscreens can be studied using the same SEM methods used for metals, especially ESEM conducted at low vacuum. Fracture surfaces are especially valuable sources of information because surface features like hachures can enable the origin or origins of the cracks to be found. Analysis of the surface features is carried out using fractography.
The position of the origin can then be matched with likely loads on the product to show how an accident occurred, for example. Inspection of bullet holes can often show the direction of travel and energy of the impact, and the way common glass products like bottles can be analysed to show whether deliberately or accidentally broken in a crime or accident. Defects such as foreign particles will often occur near or at the origin of the critical crack, and can be readily identified by ESEM.
Polymers and composites
Thermoplastics, thermosets, and composites can be analyzed using FTIR and UV spectroscopy as well as NMR and ESEM. Failed samples can either be dissolved in a suitable solvent and examined directly (UV, IR and NMR spectroscopy) or as a thin film cast from solvent or cut using microtomy from the solid product. The slicing method is preferable since there are no complications from solvent absorption, and the integrity of the sample is partly preserved. Fractured products can be examined using fractography, an especially useful method for all fractured components using macrophotography and optical microscopy. Although polymers usually possess quite different properties to metals and ceramics, they are just as susceptible to failure from mechanical overload, fatigue and stress corrosion cracking if products are poorly designed or manufactured. Many plastics are susceptible to attack by active chemicals like chlorine, present at low levels in potable water supplies, especially if the injection mouldings are faulty.
ESEM is especially useful for providing elemental analysis from viewed parts of the sample being investigated. It is effectively a technique of microanalysis and valuable for examination of trace evidence. On the other hand, colour rendition is absent, and there is no information provided about the way in which those elements are bonded to one another. Specimens will be exposed to a vacuum, so any volatiles may be removed, and surfaces may be contaminated by substances used to attach the sample to the mount.
Elastomers
Rubber products are often safety-critical parts of machines, so that failure can often cause accidents or loss of function. Failed products can be examined with many of the generic polymer methods, although it is more difficult if the sample is vulcanized or cross-linked. Attenuated total reflectance infra-red spectroscopy is useful because the product is usually flexible so can be pressed against the selenium crystal used for analysis. Simple swelling tests can also help to identify the specific elastomer used in a product. Often the best technique is ESEM using the X-ray elemental analysis facility on the microscope. Although the method only provides elemental analysis, it can provide clues as to the identity of the elastomer being examined. Thus the presence of substantial amounts of chlorine indicates polychloroprene while the presence of nitrogen indicates nitrile rubber. The method is also useful in confirming ozone cracking by the large amounts of oxygen present on cracked surfaces. Ozone attacks susceptible elastomers such as natural rubber, nitrile rubber and polybutadiene and associated copolymers. Such elastomers possess double bonds in their main chains, the group which is attacked during ozonolysis.
The problem occurs when small concentrations of ozone gas are present near to exposed elastomer surfaces, such as O-rings and diaphragm seals. The product must be in tension, but only very low strains are sufficient to cause degradation.
See also
Applied spectroscopy
Brittleness
Circumstantial evidence
Forensic engineering
Forensic polymer engineering
Forensic science
Fracture
Fractography
Fracture mechanics
Ozone cracking
Polymer degradation
Skid mark
Stress corrosion cracking
Trace evidence
References
Lewis, Peter Rhys, Reynolds, K, Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004).
Lewis, Peter Rhys Forensic Polymer Engineering: Why polymer products fail in service, 2nd edition, Woodhead/Elsevier (2016).
Engineering disciplines
Materials science
Materials engineering | Forensic materials engineering | [
"Physics",
"Materials_science",
"Engineering"
] | 1,289 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
15,964,820 | https://en.wikipedia.org/wiki/European%20Chemical%20Society | The European Chemical Society (EuChemS) is a European non-profit organisation which promotes collaboration between non-profit scientific and technical societies in the field of chemistry.
Based in Brussels, Belgium, the association took over the role and responsibilities of the Federation of European Chemical Societies and Professional Institutions (FECS) founded in 1970. It currently has 50 Member Societies and supporting members, with a further 19 divisions and working parties. It represents more than 160,000 chemists from more than 30 countries in Europe.
On 26 August 2022, the EuChemS General Assembly voted Angela Agostiano, Professor at the University of Bari Aldo Moro, Italy, as EuChemS President-Elect. Her term as President began in January 2023. Nineta Hrastelj is Secretary General. Floris Rutjes of Radboud University, is Vice-President of EuChemS.
Aims and function
The European Chemical Society has two major aims. By bringing together national chemical societies from across Europe, it aims to foster a community of scientists from different countries and provide opportunities for them to exchange ideas, communicate, cooperate on work projects and develop their networks. EuChemS in turn relies on the knowledge of this community to provide sound scientific advice to policymakers at the European level, in order to better inform their decision-making work. EuChemS is an official accredited stakeholder of the European Food Safety Agency (EFSA) and the European Chemical Agency (ECHA). EuChemS also relies on quality science communication to better inform citizens, decision-makers and scientists of the latest research developments in the chemical sciences, and their role in tackling major societal, environmental and economic challenges.
Because the field of chemistry is particularly vast with many different disciplines within it, EuChemS provides advice and knowledge on a broad range of subjects including:
EU Research Framework Programmes, such as Horizon 2020 and Horizon Europe
Open Science
Education and STEM
Environmental issues and climate change
Circular Economy
Renewable Energy
Food safety
Science literacy
Health
Ethics and scientific integrity
Cultural Heritage
Chemical and nuclear safety
EuChemS is a signatory of the EU Transparency register. The register number is: 03492856440-03.
Divisions and Working Parties
The EuChemS scientific divisions and working parties are networks in their own fields of expertise and promote collaboration with other European and international organisations. They organise high quality scientific conferences in chemical and molecular sciences and interdisciplinary areas.
Division of Analytical Chemistry
Division of Chemical Education
Division of Chemistry and the Environment
Division of Chemistry in Life Sciences
Division of Computational Chemistry
Division of Food Chemistry
Division of Green and Sustainable Chemistry
Division of Inorganic Chemistry
Division of Nuclear and Radiochemistry
Division of Organic Chemistry
Division of Organometallic Chemistry
Division of Physical Chemistry
Division of Solid State Chemistry
Division of Chemistry and Energy
Working Party on Chemistry for Cultural Heritage
Working party on Ethics in Chemistry
Working Party on the History of Chemistry
The European Young Chemists' Network (abbreviated to EYCN) is the younger members' division of EuChemS.
Events
EuChemS organises a variety of different events, including policy workshops with the European Institutions, specialised academic conferences, as well as the biennial EuChemS Chemistry Congress (ECC). There have been 8 Congresses so far since the first in 2006, held in Budapest, Hungary.
The congresses have taken place in: Turin, Italy (2008); Nuremberg, Germany (2010); Prague, Czechia (2012); Istanbul, Turkey (2014); Seville, Spain (2016); Liverpool, UK (2018), Lisbon, Portugal (2022). The next ECC is set to be held in Dublin, Ireland in 2024. The ECCs usually attract some 2000 chemists from more than 50 countries across the world.
Awards
EuChemS proposes several awards including the European Chemistry Gold Medal Award, awarded in 2018 to Nobel Laureate Bernard Feringa and in 2020 to Michele Parrinello; the EuChemS Award for Service; the EuChemS Lecture Award; the European Young Chemists' Award; the EuChemS EUCYS Award; the EuChemS Historical Landmarks Award, as well as several Divisional Awards.
EuChemS implemented in 2020 the EuChemS Chemistry Congress fellowship scheme. The aim of EuChemS fellowship scheme is to support early career researchers (bachelor, masters and PhD students) actively attending the EuChemS Chemistry Congresses.
EuChemS Gold Medal
The EuChemS Gold medal is awarded to reflect the exceptional achievements of scientists working in the field of chemistry in Europe.
2022
Dame Carol Robinson
2020
Michele Parrinello
2018
Bernard L. Feringa
EuChemS Historical Landmarks Awards
The EuChemS Historical Landmarks Award recognize sites important in the history of chemistry in Europe:
2020
Prague, Czech Republic (50th anniversary of the foundation of EuChemS).
Giessen, Germany, Justus Liebig’s Laboratory.
2019
Almadén mines in Spain (producing mercury for Spain and the Spanish empire) and Edessa Cannabis Factory Museum, Greece (a preserved factory producing ropes and twine from hemp).
2018
The Ytterby mine in Sweden (linked to the discovery of 8 chemical elements) and ABEA in Crete, Greece (a factory processing olive oil).
Projects and activities
In light of the UN declared International Year of the Periodic Table of Chemical Elements of 2019, EuChemS published a Periodic Table which depicts the issue of the abundance of the chemical elements to raise awareness of the need to develop better recycling capacities, to manage waste, and to find alternative materials to the elements that are at risk of being unusable.
Members & Supporting Members
Austrian Chemical Society
Austrian Society of Analytical Chemistry
Royal Flemish Chemical Society
Walloon Royal Society of Chemistry
Union of Chemists in Bulgaria
Croatian Chemical Society
Pancyprian Union of Chemists
Czech Chemical Society
Danish Chemical Society
Estonian Chemical Society
Finnish Chemical Society
French Chemical Society
German Chemical Society
German Bunsen Society for Physical Chemistry
Association of Greek Chemists
Hungarian Chemical Society
Institute of Chemistry of Ireland
Israel Chemical Society
Italian Chemical Society
Lithuanian Chemical Society
Association of Luxembourgish Chemists
Society of Chemists and Technologists of Macedonia
Chemical Society of Montenegro
Royal Dutch Chemical Society
Norwegian Chemical Society
Polish Chemical Society
Portuguese Chemical Society
Portuguese Electrochemical Society
Romanian Chemical Society
Mendeleev Russian Chemical Society
Russian Scientific Council on Analytical Chemistry
Serbian Chemical Society
Slovak Chemical Society
Slovenian Chemical Society
Royal Spanish Chemical Society
Spanish Society of Analytical Chemistry (SEQA)
Catalan Chemical Society
Swedish Chemical Society
Swiss Chemical Society
Turkish Chemical Society
Royal Society of Chemistry
Supporting members:
European Nanoporous Materials Institute of Excellence (ENMIX)
European Chemistry Thematic Network Association (ECTN)
European Federation of Managerial Staff in the Chemical and Allied Industries (FECCIA)
European Research Institute of Catalysis (ERIC)
European Federation for Medicinal Chemistry (EFMC)
International Sustainable Chemistry Collaborative Centre (ISC3)
ChemPubSoc Europe
Italian National Research Council (CNR)
See also
European Chemist
European Physical Society
Timeline of chemistry
European Research Council
Marie Skłodowska-Curie Actions
References
External links
EuChemS
EuChemS Newsletter
Brussels News Updates
1st EuChemS Chemistry Congress 2006
2nd EuChemS Chemistry Congress 2008
3rd EuChemS Chemistry Congress 2010
4th EuChemS Chemistry Congress 2012
5th EuChemS Chemistry Congress 2014
6th EuChemS Chemistry Congress 2016
7th EuChemS Chemistry Congress 2018
8th EuChemS Chemistry Congress 2022
Chemistry societies
International scientific organizations based in Europe
Organizations established in 1970 | European Chemical Society | [
"Chemistry"
] | 1,496 | [
"Chemistry societies",
"nan"
] |
15,966,023 | https://en.wikipedia.org/wiki/Pyramid%20%28image%20processing%29 | Pyramid, or pyramid representation, is a type of multi-scale signal representation developed by the computer vision, image processing and signal processing communities, in which a signal or an image is subject to repeated smoothing and subsampling. Pyramid representation is a predecessor to scale-space representation and multiresolution analysis.
Pyramid generation
There are two main types of pyramids: lowpass and bandpass.
A lowpass pyramid is made by smoothing the image with an appropriate smoothing filter and then subsampling the smoothed image, usually by a factor of 2 along each coordinate direction. The resulting image is then subjected to the same procedure, and the cycle is repeated multiple times. Each cycle of this process results in a smaller image with increased smoothing, but with decreased spatial sampling density (that is, decreased image resolution). If illustrated graphically, the entire multi-scale representation will look like a pyramid, with the original image on the bottom and each cycle's resulting smaller image stacked one atop the other.
A bandpass pyramid is made by forming the difference between images at adjacent levels in the pyramid and performing image interpolation between adjacent levels of resolution, to enable computation of pixelwise differences.
Pyramid generation kernels
A variety of different smoothing kernels have been proposed for generating pyramids. Among the suggestions that have been given, the binomial kernels arising from the binomial coefficients stand out as a particularly useful and theoretically well-founded class. Thus, given a two-dimensional image, we may apply the (normalized) binomial filter (1/4, 1/2, 1/4) typically twice or more along each spatial dimension and then subsample the image by a factor of two. This operation may then be repeated as many times as desired, leading to a compact and efficient multi-scale representation. If motivated by specific requirements, intermediate scale levels may also be generated, where the subsampling stage is sometimes left out, leading to an oversampled or hybrid pyramid. With the increasing computational power of modern CPUs, it is in some situations also feasible to use Gaussian filters with wider support as smoothing kernels in the pyramid generation steps.
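As an illustration of the procedure described above, the following is a minimal sketch in Python/NumPy (not taken from the source; the function names and the number of smoothing passes are illustrative choices) of a lowpass pyramid built with the normalized binomial kernel (1/4, 1/2, 1/4) followed by subsampling by a factor of two:

```python
import numpy as np

def binomial_smooth(img, passes=2):
    """Apply the normalized binomial kernel (1/4, 1/2, 1/4) along each image axis."""
    k = np.array([0.25, 0.5, 0.25])
    out = img.astype(float)
    for _ in range(passes):
        for axis in (0, 1):
            out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, out)
    return out

def lowpass_pyramid(img, levels=4):
    """Repeatedly smooth and subsample by 2, returning all levels (finest first)."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        smoothed = binomial_smooth(pyramid[-1])
        pyramid.append(smoothed[::2, ::2])  # subsample by a factor of 2 in each direction
    return pyramid
```

Each successive level has roughly half the linear size of the previous one, which is what gives the stacked representation its pyramid shape.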
Gaussian pyramid
In a Gaussian pyramid, subsequent images are weighted down using a Gaussian average (Gaussian blur) and scaled down. Each pixel containing a local average corresponds to a neighborhood pixel on a lower level of the pyramid. This technique is used especially in texture synthesis.
Laplacian pyramid
A Laplacian pyramid is very similar to a Gaussian pyramid but saves the difference image between the blurred versions at successive levels. Only the smallest level is not a difference image; keeping it enables reconstruction of the high-resolution image from the difference images at the higher levels. This technique can be used in image compression.
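A hedged sketch of this construction (Python/NumPy, reusing lowpass_pyramid from the sketch above; the nearest-neighbour upsampling is a deliberately crude stand-in for the interpolation a real implementation would use):

```python
import numpy as np

def upsample(img, shape):
    """Nearest-neighbour upsampling to a target shape (crude stand-in for interpolation)."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(gaussian_pyr):
    """Difference images between adjacent levels; the coarsest level is kept as-is."""
    lap = [fine - upsample(coarse, fine.shape)
           for fine, coarse in zip(gaussian_pyr[:-1], gaussian_pyr[1:])]
    lap.append(gaussian_pyr[-1])
    return lap

def reconstruct(lap):
    """Rebuild the full-resolution image by adding back the differences, coarse to fine."""
    img = lap[-1]
    for diff in reversed(lap[:-1]):
        img = diff + upsample(img, diff.shape)
    return img
```

Because the same upsampling operator is used both when building the difference images and when adding them back, the reconstruction is exact regardless of how crude that operator is.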
Steerable pyramid
A steerable pyramid, developed by Simoncelli and others, is an implementation of a multi-scale, multi-orientation band-pass filter bank used for applications including image compression, texture synthesis, and object recognition. It can be thought of as an orientation selective version of a Laplacian pyramid, in which a bank of steerable filters are used at each level of the pyramid instead of a single Laplacian or Gaussian filter.
Applications of pyramids
Alternative representation
In the early days of computer vision, pyramids were used as the main type of multi-scale representation for computing multi-scale image features from real-world image data. More recent techniques include scale-space representation, which has been popular among some researchers due to its theoretical foundation, its ability to decouple the subsampling stage from the multi-scale representation, its more powerful tools for theoretical analysis, and the ability to compute a representation at any desired scale, thus avoiding the algorithmic problems of relating image representations at different resolutions. Nevertheless, pyramids are still frequently used for expressing computationally efficient approximations to scale-space representation.
Detail manipulation
Levels of a Laplacian pyramid can be added to or removed from the original image to amplify or reduce detail at different scales. However, detail manipulation of this form is known to produce halo artifacts in many cases, leading to the development of alternatives such as the bilateral filter.
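Continuing the sketch above (same assumptions and helper functions; the gain value is arbitrary), detail amplification amounts to scaling one or more difference levels before reconstructing:

```python
def amplify_detail(img, gain=2.0, levels=4):
    """Boost fine-scale detail by scaling the finest Laplacian level before reconstruction."""
    lap = laplacian_pyramid(lowpass_pyramid(img, levels))
    lap[0] = gain * lap[0]   # gain > 1 amplifies detail, 0 < gain < 1 reduces it
    return reconstruct(lap)
```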
Some image compression file formats use the Adam7 algorithm or some other interlacing technique.
These can be seen as a kind of image pyramid.
Because these file formats store the "large-scale" features first, and the fine-grained details later in the file,
a particular viewer displaying a small "thumbnail" or on a small screen can quickly download just enough of the image to display it in the available pixels—so one file can support many viewer resolutions, rather than having to store or generate a different file for each resolution.
See also
Mipmap
Scale space implementation
Level of detail
JPEG 2000#Multiple resolution representation
References
External links
Gaussian-Laplacian Pyramid Image Coding - illustrates methods of Downsampling, Upsampling, and Gaussian convolution
The Gaussian Pyramid - provides a brief introduction for the procedure and cites several sources
Laplacian Irregular Graph Pyramid - Figure 1 on this page illustrates an example of the Gaussian Pyramid
The Laplacian Pyramid as a Compact Image Code on eBook Submission
Image processing
Computer vision | Pyramid (image processing) | [
"Engineering"
] | 1,067 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
15,966,621 | https://en.wikipedia.org/wiki/Z-SAN | Z-SAN is a proprietary type of storage area network licensed by Zetera corporation. Z-SAN hardware is bundled with a modified version of SAN-FS, which is a shared disk file system driver and management software product SAN File System (SFS) made by DataPlow. The shared disk file system allows multiple computers to access the same volume at block level. Zetera calls their version of the file system Z-SAN.
The Z-SAN software license is purchased as part of the hardware package and is similar to ATA over Ethernet (AoE) sold by Coraid and LeftHand Networks. Zetera does not sell products directly, but instead licenses its technology to various other companies such as Netgear and Bell Microproducts. Like AoE, Z-SAN is intended to be a low-cost alternative to Fibre Channel and iSCSI. It achieves this by eliminating the need for host adapter and TCP offload engine hardware, and by using standard Ethernet switches instead of the more expensive Fibre Channel switches.
While AoE is mostly supported on Linux, Z-SAN is supported on Microsoft Windows platforms. A Z-SAN can array many more disks than a standard RAID. The Zetera website claims that MIT has a Z-SAN array totaling 1.4 Petabytes of storage. The disk arrays can be both striped and mirrored.
In 2005, the software was licensed to Netgear to be used in the Netgear SC101 product.
References
External links
http://www.computerpoweruser.com/articles/archive/c0604/28c04/28c04.pdf Article about Z-SAN
http://www.zetera.com/
Storage area networks | Z-SAN | [
"Technology"
] | 357 | [
"Computing stubs",
"Computer network stubs"
] |
15,967,005 | https://en.wikipedia.org/wiki/Hadamard%27s%20lemma | In mathematics, Hadamard's lemma, named after Jacques Hadamard, is essentially a first-order form of Taylor's theorem, in which we can express a smooth, real-valued function exactly in a convenient manner.
Statement
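The section body is not included here; as a reminder, the standard formulation of the lemma (supplied as a sketch, not quoted from this article) reads:

```latex
% Hadamard's lemma (standard formulation; supplied as a sketch)
% Let f : \mathbb{R}^n \to \mathbb{R} be smooth and let a \in \mathbb{R}^n. Then there exist
% smooth functions g_1, \dots, g_n such that
\[
  f(x) = f(a) + \sum_{i=1}^{n} (x_i - a_i)\, g_i(x),
  \qquad\text{with}\qquad
  g_i(a) = \frac{\partial f}{\partial x_i}(a).
\]
```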
Proof
Consequences and applications
See also
Citations
References
Real analysis
Theorems in analysis | Hadamard's lemma | [
"Mathematics"
] | 66 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical problems",
"Mathematical theorems"
] |
15,967,058 | https://en.wikipedia.org/wiki/Quercus%20%C3%97%20warei | Quercus × warei is a hybrid oak tree in the genus Quercus. The tree is a hybrid of Quercus robur f. fastigiata (upright English oak) and Quercus bicolor (swamp white oak).
The hybrid is named for the American dendrologist George Ware, former Research Director at the Morton Arboretum in Illinois.
Cultivars
Two cultivars, 'Long' and 'Nadler' were patented. 'Nadler' was patented and trademarked in 2007. The mother tree of both cultivars is a Quercus robur f. fastigiata (upright English oak, a narrow form) growing in Columbia, Missouri. The ortet of 'Nadler' is growing in Jacksonville, Illinois. Approximately 1000 seeds were collected from the mother plant in 1974 and propagated, with two selected for further development as cultivars, which are now propagated clonally.
The 'Long' cultivar is marketed under the trade designation , and the 'Nadler' cultivar under the trade designation . 'Nadler' oaks are 11 m (35 ft) tall with a limb spread of 2 m (6 ft) at an age of 30 years. 'Nadler' and 'Long' are highly resistant to powdery mildew, which plagues the Q. robur parent. This clone also exhibits heterosis (hybrid vigor).
References
Dirr, Michael A. 2009. Manual of woody landscape plants: Their identification, ornamental characteristics, culture, propagation and uses. 6th edition. Stipes Publishing Co. Champaign, Illinois. pp. 933
Growing Trends: The Official Publication of the Illinois Green Industry Association. "Landscape Architects Meet Heritage Trees... Heritage Trees Meet Some Landscape Architects." July 2007, pp. 16, 19
2008–2009 Catalog of Robinson Nursery Inc.
2009–2010 Catalog of Heritage Seedlings Inc., KCK Farms LLC, Kuenzi Turf and Nursery, Robinson Nursery Inc. and Seseter Farms.
External links
Kindred Spirit Additional Info
warei
Hybrid plants | Quercus × warei | [
"Biology"
] | 421 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
15,967,917 | https://en.wikipedia.org/wiki/Sodium%20citrate | Sodium citrate may refer to any of the sodium salts of citric acid (though most commonly the third):
Monosodium citrate
Disodium citrate
Trisodium citrate
The three forms of salt are collectively known by the E number E331.
Applications
Food
Sodium citrates are used as acidity regulators in food and drinks, and also as emulsifiers for oils. They enable cheeses to melt without becoming greasy and also reduce the acidity of food. They are generally considered safe and are designated GRAS by the FDA.
Blood clotting inhibitor
Sodium citrate is used to prevent donated blood from clotting in storage, and can also be used as an additive for apheresis to prevent clots forming in the tubes of the machine. By binding with calcium ions in the blood it prevents the process of coagulation. It is also used as an anticoagulant for laboratory testing, in that blood samples are collected into sodium citrate-containing tubes for tests such as the PT (INR), APTT, and fibrinogen levels. Sodium citrate is used in medical contexts as an alkalinizing agent in place of sodium bicarbonate, to neutralize excess acid in the blood and urine.
Metabolic acidosis
It has applications for the treatment of metabolic acidosis and chronic kidney disease.
Ferrous nanoparticles
Along with oleic acid, sodium citrate may be used in the synthesis of magnetic Fe3O4 nanoparticle coatings.
References
Citrates
Chelating agents
Organic sodium salts
E-number additives | Sodium citrate | [
"Chemistry"
] | 335 | [
"Chelating agents",
"Organic sodium salts",
"Process chemicals",
"Salts"
] |
15,968,309 | https://en.wikipedia.org/wiki/Laboratory%20B | Laboratory B (Russian: ), also known as Object B () or Object 2011 during its period of operation, was a former Soviet nuclear research site constructed in 1946 near Sungul' in Chelyabinsk Oblast in Russia. Operated under the 9th Chief Directorate of the Soviet Ministry of Internal Affairs, it was a major site for the Soviet nuclear weapons program that worked on the handling, treatment, and use of the radioactive products generated in reactors, as well as radiation biology, dosimetry, and radiochemistry. It had two divisions: radiochemistry and radiobiophysics; the latter was headed by N. V. Timofeev-Resovskij.
Laboratory B was run as a sharashka—a secret facility run as a prison, with at least ten of its German staff classified as prisoners of war from World War II. For two years, the German chemist Nikolaus Riehl was the scientific director.
It was closed in 1955, and has since been abandoned and left as a ruin.
Creation
From early in 1945, Colonel General A. P. Zavenyagin, as head of the 9th Chief Directorate of the NKVD (MVD after 1946), was responsible for the acquisition of German scientists, equipment, materiel, and intellectual property, under the Russian Alsos, to help Russia with the Soviet atomic bomb project. The issuance of Decree No. 9877 by the Council of Ministers on 20 August 1945 created a special committee of which Zavenyagin was a member.
Zavenyagin was responsible for establishing, building, managing, and providing security for facilities supporting the atomic bomb project. Zavenyagin's purview also included the resources of the Gulag; some of the facilities to which the German scientists were assigned were run as a sharashka. German scientists were available for recruitment from the Soviet occupation zone in Germany. Also, immediately after World War II and extending into 1949, the Russians also had a large pool of German PoW scientists and highly skilled specialists from which to recruit; the main camp was at Krasnogorsk.
Facilities to which the German scientists were assigned were under the authority of the 9th Chief Directorate and included the following (with annotations of prominent Germans at the facilities):
Laboratory 2 (later known as the Kurchatov Institute of Atomic Energy and today as the Russian Scientific Center "Kurchatov Institute") in Moscow. – Josef Schintlmeister.
Scientific Research Institute No. 9 (NII-9; today the Bochvar All-Russian Scientific Research Institute of Inorganic Materials, Bochvar VNIINM) in Moscow – Max Volmer and Robert Döpel.
Elektrostal' Plant No. 12 – A. Baroni (PoW), Hans-Joachim Born (PoW), Alexander Catsch (PoW), Werner Kirst, H. E. Ortmann, Przybilla, Nikolaus Riehl, Herbert Schmitz (PoW), Herbert Thieme, Tobein, Günter Wirths, and Karl Zimmer (PoW).
Institutes A (in Sinop, a suburb of Sukhumi) and G (in Agudzery) created for Manfred von Ardenne and Gustav Hertz, respectively. Institutes A and G were later used as the basis for the Sukhumi Physico-Technical Institute (SFTI); today it is the State Scientific Production Association "SFTI". Institute A – Ingrid Schilling, Fritz Schimohr, Fritz Schmidt, Gerhard Siewert, Max Steenbeck (PoW), Peter Adolf Thiessen, and Karl-Franz Zühlke. Institute G – Heinz Barwich, Werner Hartmann, and Justus Mühlenpfordt.
Laboratory V was created for Heinz Pose in Obninsk, and it was run as a sharashka. Laboratory V was later renamed the Physics and Power Engineering Institute (FEhI or IPPE); today the "State Scientific Center of the Russian Federation - A.I. Leipunsky Physics and Power Engineering Institute" (JSC SSC RF - FEI) – Werner Czulius, Walter Hermann, Hans Jürgen von Oertzen, Ernst Rexer, Karl-Heinrich Riewe, and Carl Friedrich Weiss.
Laboratory B in Sungul' was established by a decree of the Council of Ministers in 1946, and it was run as a Sharashka. In 1955, it was assimilated into a new, second nuclear weapons institute, Scientific Research Institute-1011 (NII-1011), today known as the Russian Federal Nuclear Center All-Russian Scientific Research Institute of Technical Physics (RFYaTs–VNIITF). – Hans-Joachim Born (PoW), Alexander Catsch (PoW), Willi Lange, Nikolaus Riehl, and Karl Zimmer (PoW).
Research conducted
Laboratory B had two scientific divisions, a radiobiophysics division headed by the geneticist N. V. Timofeev-Resovskij (prisoner), and a radiochemistry division headed by Sergej Aleksandrovich Voznesenskij (prisoner).
Radiobiophysics
In 1925, as the Russian part of a collaborative effort between Russia and Germany, the Russians sent Timofeev-Resovskij, and his colleague Sergei Romanovich Tsarapkin, to Germany. There, they worked with Oskar Vogt, director of the Kaiser-Wilhelm Institut für Hirnforschung (KWIH, Kaiser Wilhelm Institute for Brain Research), to establish the Abteilung für Experimentelle Genetik (Department of Experimental Genetics) and Timofeev-Resovskij became its director. Timofeev-Resovskij stayed in Germany through World War II, and built his department to world-renowned status. On the basis of false denunciations, Timofeev-Resovskij and Tsarapkin were arrested by the NKVD in September 1945, returned to Russia, and both sentenced to 10 years in the Gulag. They ended up in the Karaganda prison camp in northern Kazakhstan, one of the most terrible camps in the Gulag; the harsh conditions of Timofeev-Resovskij's transportation and incarceration in the labor camp contributed to a significant decline in his health, including the degradation of his vision brought on by malnutrition. Colonel General Zavenyagin, who had intended to utilize Timofeev-Resovskij's talents in the Soviet atomic bomb project, had Timofeev-Resovskij and Tsarapkin sent to Laboratory B in 1947. Timofeev-Resovskij's wife Elena Aleksandrovna, after receipt of a letter in his handwriting, left Berlin in 1948, with their son Andrew, to join him in Sungul'. The house occupied by the three Timofeev-Resovskijs was every bit as nice as that planned for the German scientists working at the Sungul' institute. (In 1992, Timofeev-Resovskij was rehabilitated, 11 years after his death!)
Born, Catsch, and Zimmer, who had worked for Timofeev-Resovskij in Berlin and who were sent to Laboratory B by Riehl in December 1947, were able to conduct work similar to that which they had done in Germany, and all three became section heads in Timofeev-Resovskij's department. Born examined fission products, developed methods of separating plutonium from fission products created in a nuclear reactor, and investigated and developed radiation health and safety measures. Catsch began his work on developing methods to extract radionucleotides from various organs, which he would continue when he left Russia.
The radiobiophysics division under Timofeev-Resovskij had four sections which conducted experimental studies in four basic directions:
Effects of radioactive isotopes on animals.
Cytological effects of radiation on plants and animals.
Effects of weak concentrations of radioactive materials and low doses of ionizing radiation, mainly on crop cultivated plants.
Effects of the distribution and accumulation of different radioactive materials introduced into the soil, ground water, and freshwater bodies.
The agrobiological and hydrobiological experiments were united on the general basis of the biogeochemical analysis of the experimentally created elementary biogeocenosis and the introduction of special factor radioactive materials into it.
Radiochemistry
On the basis of a false denunciation, Sergej Aleksandrovich Voznesenskij was arrested in June 1941; in April 1942, he was sentenced to 10 years in the Gulag. From March 1943 to 1947, he led a research group in the 4th Special Department of the NKVD in Moscow; the 4th Special Department provided military research and development by utilizing specialist prisoners, i.e., scientists. In December 1947, he was transferred to Laboratory B to head up the radiochemistry division. With the liquidation of Laboratory B and its merger into NII-1011 in 1955, Voznesenskij was transferred to the Ural Polytechnical Institute to head up the Department of Radiochemistry, and was simultaneously appointed as a scientific consultant at Combine No. 817 on problems of radioactive waste cleanup. (Voznesenskij had been fully rehabilitated in May 1953.)
The radiochemistry division had four sections and conducted research and development in the following areas:
Development of methods of cleaning radioactive waste water.
Development of the most appropriate structures for the storage of radioactive waste.
Study of radioactive isotope ion exchange.
Development of spectroscopic methods for the analysis of complex mixtures of radioactive components.
Study of the precipitation of radioactive fragments.
Development of methods to obtain clean (chistykh) isotopic preparations from the solutions of fission fragments of uranium, supplied by Combine No. 817 in nearby Ozersk.
Overview
Owing to its proximity to the radiochemical plutonium facility Combine No. 817, the scientists at the institute had access to high-dose radioactive materials.
The scientific staff at Laboratory B – a Sharashka – was both Soviet and German, the former being mostly political prisoners or exiles, although some of the service staff were criminals – one had been convicted of murder. In 1955, the institute had 451 staff members; in 1946 there had been 95. The institute had a maximum of 26 German scientists, and more than 10 of them initially were classified as PoWs. The German contingent left the institute in 1953. The institute had two departments: radiobiophysics (No. 1) and radiochemistry (No. 2). In 1955, the institute was merged into the newly created second nuclear weapons design institute Nauchno-Issledovatel'skij Institut-1011 (NII-1011). During the merger, the radiopathology section of the radiochemistry department was transferred to Combine No. 817 (Ozersk) and a section of the radiobiophysics department was transferred to the Ural Branch of the USSR Academy of Sciences.
Accomplishments of Laboratory B include the development of technology for the isolation of fission by-products such as strontium-90, caesium-137, and zirconium-95, and the technology to remove these isotopes from chemical compounds.
Personnel
Directors
The first director of Laboratory B, starting in 1946, was MVD Colonel Alexander Konstantinovich Uralets, who had previously worked on the Soviet atomic bomb project. He received the Order of Lenin for his management of Laboratory B.
From 26 December 1952 to 14 June 1955, the director was the chemist Gleb Arkad'evich Sereda.
Scientific directors
Nikolaus Riehl was the scientific director of Laboratory B from September 1950 to early autumn in 1952.
Riehl, scientific director of the Auergesellschaft, was sent by the Russians, in 1945, to head a group at Plant No. 12 in Ehlektrostal' to develop an industrial process for production of reactor-grade uranium. Other Germans sent to work there included A. Baroni (PoW), Werner Kirst, Henry E. Ortmann (chemist from Auergesellschaft), Przybilla, Herbert Schmitz (PoW), Herbert Thieme, Tobein, and Günter Wirths (chemist from Auergesellschaft). When Riehl learned that professional colleagues from the Kaiser-Wilhelm Institut für Hirnforschung (Kaiser Wilhelm Institute for Brain Research) in Berlin, Hans-Joachim Born and Karl Zimmer, were being held in Krasnogorsk, in the main PoW camp for Germans with scientific degrees, Riehl arranged through Zavenyagin to have them sent to Ehlektrostal'. Alexander Catsch was also sent there. At Ehlektrostal', Riehl had a hard time incorporating Born, Catsch, and Zimmer into his tasking on uranium production, as Born was a radiochemist, Catsch was a physician and radiation biologist, and Zimmer was a physicist and radiation biologist; in December 1947, Riehl sent all three to Laboratory B to work with Timofeev-Resovskij.
After the detonation of the Russian uranium bomb, uranium production was going smoothly and Riehl's oversight was no longer necessary at Plant No. 12. Riehl then went, in 1950, to be the scientific director of Laboratory B, where he stayed until 1952. Essentially the remaining personnel in his Ehlektrostal' group were assigned elsewhere, with the exception of Henry E. Ortmann, A. Baroni (PoW), and Herbert Schmitz (PoW), who went with Riehl to Sungul'.
Besides those already mentioned, other Germans at Laboratory B were Rinatia von Ardenne (sister of Manfred von Ardenne, director of Institute A in Sukhumi), Wilhelm Menke (botanist), Willi Lange (who married the widow of Karl-Heinrich Riewe, who had been at Heinz Pose's Laboratory V in Obninsk), Joachim Pani, and K. K. Rintelen. Until Riehl's return to Germany in June 1955, which Riehl had to request and negotiate, he was quarantined in Agudzery (Agudseri) starting in 1952; Agudzery was the location of Institute G.
Bibliography
Albrecht, Ulrich, Andreas Heinemann-Grüder, and Arend Wellmann Die Spezialisten: Deutsche Naturwissenschaftler und Techniker in der Sowjetunion nach 1945 (Dietz, 1992, 2001)
Babkov, V. V. Nikolaj Vladimiorovich Timofeev-Resovskij [In Russian], Vestnik VOGiS Article 5, Number 15, 8-14 (2000)
Emel'yanov, B. M. and V. S. Gavril'chenko (editors) Laboratory B. The Sungul' Phenomena. [In Russian] (RFYaTs-VNIITF, 2000)
Izvarina, E. Nuclear project in the Urals: History in Photographs [In Russian] (Okonchanie. Nachalo v No. 12) (Russian Academy of Sciences, Ural Branch, 2006)
Knight, Amy "Beria, Stalin's First Lieutenant" (Princeton, 1993)
Kozubov, G. Sungul' Conference, August 2000, Vestnik Instituta Biologii Komi NTs UrO RAN Issue 36, 2000
Kruglov, Arkadii The History of the Soviet Atomic Industry (Taylor and Francis, 2002)
Maddrell, Paul "Spying on Science: Western Intelligence in Divided Germany 1945 – 1961" (Oxford, 2006)
Medvedev, Zhores A. Nikolai Wladimirovich Timofeeff-Ressovsky (1900-1981), Genetics Volume 100, Number 1, 1-5 (1982)
Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945-1949 (Belknap, 1995)
Oleynikov, Pavel V. German Scientists in the Soviet Atomic Project, The Nonproliferation Review Volume 7, Number 2, 1 – 30 (2000). The author has been a group leader at the Institute of Technical Physics of the Russian Federal Nuclear Center in Snezhinsk (Chelyabinsk-70).
Paul, Diane B. and Costas B. Krimbas Nikolai V. Timofeeff-Ressovsky, Scientific American Volume 266, Number 2, 86-92 (1992)
Penzina, V. V. Archive of the Russian Federal Nuclear Center of the All-Russian Scientific Research Institute of Technical Physics, named after E. I. Zababakhin. Resource No. 1 – Laboratory "B". [In Russian] (VNIITF). Penzina is cited as head of the VNIITF Archive in Snezhinsk.
Polunin, V. V. and V. A. Staroverov Personnel of Special Services in the Soviet Atomic Project 1945 – 1953 [In Russian] (FSB, 2004)
Ratner, V. A. Session in Memory of N. V. Timofeev-Resovskij in the Institute of Cytology and Genetics of the Siberian Department of the Russian Academy of Sciences [In Russian], Vestnik VOGis Article 4, No. 15 (2000).
Riehl, Nikolaus and Frederick Seitz Stalin's Captive: Nikolaus Riehl and the Soviet Race for the Bomb (American Chemical Society and the Chemical Heritage Foundation, 1996).
Timofeev-Resovskij, N. V. Kratkaya Avtobiograficheskaya Zapiska (Brief Autobiographical Note) (14 October 1977).
Vazhnov, M. Ya. A. P. Zavenyagin: Pages from His Life (chapters from the book) [In Russian]`.
Vogt, Annette Ein russisches Forscherehepaar in Berlin-Buch, Edition Luisenstadt (1998)
External links
Arzama-16
A. V. Buldakov - Joint International Biographical Center [In Russian]
Chelyabinsk-70 - All-Russian Scientific Research Institute of Technical Physics (Chelyabinsk-70) [In Russian]
Demidov, A. A. On the tracks of one "Anniversary" [In Russian] 11.08.2005
Fonotov, Mikhail Undercover People [In Russian], Ural'skaya Nov' Number 13 (2002)
GlobalSecurity.org Chelyabinsk-65 / Ozersk Combine 817 / Production Association Mayak
GlobalSecurity.org Chelyabinsk-70 / Snezhinsk. Russian Federal Nuclear Center All-Russian Institute of Technical Physics (VNIITF)
Kovaleva, Svetlana Lev and the Atom [In Russian] 2003-02-26
Kozubov, G. Sungul' Conference, August 2000, Vestnik Instituta Biologii Komi NTs UrO RAN Issue 36, 2000
(ОНИС) – Opytnaya Nauchno-Issledovatel'skaya Stantsiya (ONIS, Pilot Scientific Research Station).
Polunin, V. V. and V. A. Staroverov Personnel of Special Services in the Soviet Atomic Project 1945 – 1953 [In Russian] (FSB, 2004)
RFYaTs-VNIITF - Key Dates in the History of the RFYaTs-VNIITF [In Russian]
RFYaTS-VNIITF Creators – See entry for ТИМОФЕЕВ-РЕСОВСКИЙ Николай Владимирович (TIMOFEEV-RESOVSKIJ Nikolaj Vladimorovich) [In Russian]
RFYaTS-VNIITF Creators – See entry for УРАЛЕЦ Александр Константинович (URALETs Aleksandr Konctantinovich) [In Russian]
RFYaTS-VNIITF Creators – See entry for ВОЗНЕСЕНСКИЙ Сергей Александрович (VOZNESENSKIJ Sergej Aleksandrovich) [In Russian]
Sulakshin, S. S. (Scientific Editor) Social and Political Process of Economic Status of Russia [In Russian] 2005
Sungul' Conference – Anniversary International Conference [In Russian] UNESCO
"Я ПРОЖИЛ СЧАСТЛИВУЮ ЖИЗНЬ" – К 90-летию со дня рождения Н. В. Тимофеева-Ресовского ("I Lived a Happy Life" – In Honor of the 90th Anniversary of the Birth of N. V. Timofeev-Resovskij), ИСТОРИЯ НАУКИ. БИОЛОГИЯ (History of Science – Biology), 1990, No. 9, 68–104 (1990). This commemorative has many photographs of Timofeev-Resovskij.
References
Research institutes in Russia
Research institutes in the Soviet Union
NKVD
Radiobiology
Radiochemistry
Soviet biological weapons program
Nuclear weapons program of the Soviet Union
Abandoned buildings and structures | Laboratory B | [
"Chemistry",
"Biology"
] | 4,481 | [
"Radiobiology",
"Radiochemistry",
"Radioactivity"
] |
15,968,804 | https://en.wikipedia.org/wiki/Section%20modulus | In solid mechanics and structural engineering, section modulus is a geometric property of a given cross-section used in the design of beams or flexural members. Other geometric properties used in design include: area for tension and shear, radius of gyration for compression, and second moment of area and polar second moment of area for stiffness. Any relationship between these properties is highly dependent on the shape in question. There are two types of section modulus, elastic and plastic:
The elastic section modulus is used to calculate a cross-section's resistance to bending within the elastic range, where stress and strain are proportional.
The plastic section modulus is used to calculate a cross-section's capacity to resist bending after yielding has occurred across the entire section. It is used for determining the plastic, or full moment, strength and is larger than the elastic section modulus, reflecting the section's strength beyond the elastic range.
Equations for the section moduli of common shapes are given below. The section moduli for various profiles are often available as numerical values in tables that list the properties of standard structural shapes.
Note: Both the elastic and plastic section moduli are different from the first moment of area, which is used to determine how shear forces are distributed.
Notation
Different codes use varying notations for the elastic and plastic section modulus, as illustrated in the table below.
The North American notation is used in this article.
Elastic section modulus
The elastic section modulus is used for general design. It is applicable up to the yield point for most metals and other common materials. It is defined as S = I / c,
where:
I is the second moment of area (or area moment of inertia, not to be confused with moment of inertia), and
c is the distance from the neutral axis to the most extreme fibre.
It is used to determine the yield moment strength of a section, M_y = S σ_y,
where σ_y is the yield strength of the material.
The table below shows formulas for the elastic section modulus for various shapes.
Plastic section modulus
The plastic section modulus is used for materials and structures where limited plastic deformation is acceptable. It represents the section's capacity to resist bending once the material has yielded and entered the plastic range. It is used to determine the plastic, or full, moment strength of a section, M_p = Z σ_y,
where σ_y is the yield strength of the material.
Engineers often compare the plastic moment strength against factored applied moments to ensure that the structure can safely endure the required loads without significant or unacceptable permanent deformation. This is an integral part of the limit state design method.
The plastic section modulus depends on the location of the plastic neutral axis (PNA). The PNA is defined as the axis that splits the cross section such that the compression force from the area in compression equals the tension force from the area in tension. For sections with constant, equal compressive and tensile yield strength, the area above the PNA and the area below it will be equal: A_C = A_T.
These areas may differ in composite sections, which have differing material properties, resulting in unequal contributions to the plastic section modulus.
The plastic section modulus is calculated as the sum of the areas of the cross section on either side of the PNA, each multiplied by the distance from its local centroid to the PNA: Z = A_C y_C + A_T y_T,
where:
A_C is the area in compression,
A_T is the area in tension, and
y_C and y_T are the distances from the PNA to the centroids of those areas.
The plastic section modulus and the elastic section modulus are related by a shape factor k = Z / S = M_p / M_y.
This is an indication of a section's capacity beyond the yield strength of its material. The shape factor for a rectangular section is 1.5.
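As a hedged numerical sketch (Python; the formulas below are the standard textbook results for a solid rectangle of width b and depth h bending about its horizontal centroidal axis, and are not taken from the article's missing tables):

```python
def rectangle_section_moduli(b: float, h: float) -> dict:
    """Elastic (S) and plastic (Z) section moduli of a solid b x h rectangle."""
    I = b * h**3 / 12          # second moment of area
    c = h / 2                  # distance from neutral axis to extreme fibre
    S = I / c                  # elastic section modulus, equals b*h**2/6
    Z = b * h**2 / 4           # plastic section modulus
    return {"S": S, "Z": Z, "shape_factor": Z / S}

print(rectangle_section_moduli(b=0.1, h=0.3))   # shape_factor evaluates to 1.5
```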
The table below shows formulas for the plastic section modulus for various shapes.
Use in structural engineering
In structural engineering, the choice between utilizing the elastic or plastic (full moment) strength of a section is determined by the specific application. Engineers follow relevant codes that dictate whether an elastic or plastic design approach is appropriate, which in turn informs the use of either the elastic or plastic section modulus. While a detailed examination of all relevant codes is beyond the scope of this article, the following observations are noteworthy:
When assessing the strength of long, slender beams, it is essential to evaluate their capacity to resist lateral torsional buckling in addition to determining their moment capacity based on the section modulus.
Although T-sections may not be the most efficient choice for resisting bending, they are sometimes selected for their architectural appeal. In such cases, it is crucial to carefully assess their capacity to resist lateral torsional buckling.
While standard uniform cross-section beams are often used, they may not be optimally utilized when subjected to load moments that vary along their length. For large beams with predictable loading conditions, strategically adjusting the section modulus along the length can significantly enhance efficiency and cost-effectiveness.
In certain applications, such as cranes and aeronautical or space structures, relying solely on calculations is often deemed insufficient. In these cases, structural testing is conducted to validate the load capacity of the structure.
See also
Beam theory
Buckling
List of area moments of inertia
Second moment of area
Structural testing
Yield strength
References
Structural analysis
Mechanical quantities | Section modulus | [
"Physics",
"Mathematics",
"Engineering"
] | 1,035 | [
"Structural engineering",
"Mechanical quantities",
"Physical quantities",
"Quantity",
"Structural analysis",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
15,969,110 | https://en.wikipedia.org/wiki/Allistatin | Allistatin is the collective name for two chemicals, allistatin I and allistatin II, which may be found in garlic. There is no conclusive evidence of its existence, or the existence of the related compound garlicin. It is a sulfur-free chemical and plays an active role within garlic. It is most likely a flavonoid.
There is no experimental evidence of the structure of allistatin; some studies claim it is similar to cyanidin, while others found it shared similarities with garlicin (but not allicin).
References
Garlic
Organosulfur compounds | Allistatin | [
"Chemistry"
] | 123 | [
"Organic compounds",
"Organosulfur compounds"
] |
15,969,473 | https://en.wikipedia.org/wiki/Mycena%20leaiana | Mycena leaiana, commonly known as the orange mycena or Lea's mycena, is a species of saprobic fungi in the genus Mycena, family Mycenaceae. Characterized by their bright orange caps and stalks and reddish-orange gill edges, they usually grow in dense clusters on deciduous logs. The pigment responsible for the orange color in this species has antibiotic properties. A variety of the species, Mycena leaiana var. australis, can be found in Australia and New Zealand.
Taxonomy
Originally named Agaricus leajanus by the English biologist Miles Joseph Berkeley in 1845, the species was moved to the genus Mycena by Pier Andrea Saccardo in 1891, when the large genus Agaricus was divided. The species was named after Thomas Gibson Lea (1785–1844), a mushroom collector from Ohio who had sent a collection of specimens to Berkeley for identification.
Description
The hygrophanous cap is in diameter, and initially rounded or bell-shaped but becoming expanded and convex with age, often with a depression in the center. The color is a bright orange that fades as the mushroom matures. The surface of the cap is sticky, especially in moist weather, and smooth, while the margin often has striations. The trama is soft, watery, and white. The gills are adnexed in attachment (gills narrowly attached/tapering toward stem so that their attachment is almost free), crowded together, and yellowish in color, with the color deepening to bright orange-red at the edges. The deepening in color at the edges is due to an orange pigment that is contained largely within cells called cheilocystidia. If handled, the yellow pigment will rub off and stain the skin.
The stipe is typically long by 2–4 mm thick. The diameter of the stipe is more or less equal throughout its length, although it may be slightly enlarged at the base. It is orange in color, and has fine hairs on the upper portion, and denser hairs at the base. The orange mycena has no distinctive taste, and a slightly mealy odor. Spores are elliptical in shape, smooth, amyloid, and have dimensions of 7–10 × 5–6 μm. The spore print is white.
The species is regarded as nonpoisonous.
Mycena leaiana var. australis is a variety of Mycena leaiana found in Australia and New Zealand. In all but the color it is similar to M. leaiana. However, M. leaiana had been found primarily in the east of the United States (and specifically not on the Pacific coast at all) upon the discovery of specimens in Australia. Given this wide geographical separation (as well as the difference in cap color) a new varietal name was proposed.
Habitat and distribution
Mycena leaiana is a common species, and grows in dense cespitose clusters (with stipes sharing a single point of origin) on hardwood logs and branches. It is a North American species, and has been reported throughout the eastern and central United States and Canada. The variant Mycena leaiana var. australis can be found in Australia and New Zealand.
Bioactive compounds
Mycena leaiana produces the orange pigment leainafulvene, a member of the class of chemical compounds known as isoilludanes. Leainafulvene has weak antibacterial activity against Acinetobacter calcoaceticus, and has pronounced cytotoxic activity towards tumor cells. It also has mutagenic activity, as measured by the Ames test.
Similar species
Mycena texensis A.H. Sm. (1937) is closely related, but has been described as having "grayish colors of the cap". It is better distinguished microscopically: it has smaller spores, shorter and narrower basidia, and distinctive cystidia.
References
leaiana
Fungi described in 1845
Fungi of North America
Fungi of Australia
Taxa named by Miles Joseph Berkeley
Fungus species | Mycena leaiana | [
"Biology"
] | 830 | [
"Fungi",
"Fungus species"
] |
15,969,686 | https://en.wikipedia.org/wiki/Porosome | Porosomes are cup-shaped supramolecular structures in the cell membranes of eukaryotic cells where secretory vesicles transiently dock in the process of vesicle fusion and secretion. The transient fusion of the secretory vesicle membrane at the porosome base, via SNARE proteins, results in the formation of a fusion pore or continuity for the release of intravesicular contents from the cell. After secretion is complete, the fusion pore temporarily formed at the base of the porosome is sealed. Porosomes are a few nanometers in size and contain many different types of protein, especially chloride and calcium channels, actin, and SNARE proteins that mediate the docking and fusion of the vesicles with the cell membrane. Once the vesicles have docked with the SNARE proteins, they swell, which increases their internal pressure. They then transiently fuse at the base of the porosome, and these pressurized contents are ejected from the cell. Examination of cells following secretion using electron microscopy demonstrates an increased presence of partially empty vesicles. This suggested that during the secretory process, only a portion of the vesicular contents is able to exit the cell. This could only be possible if the vesicle were to temporarily establish continuity with the cell plasma membrane, expel a portion of its contents, then detach, reseal, and withdraw into the cytosol (endocytose). In this way, the secretory vesicle could be reused for subsequent rounds of exo-endocytosis, until completely empty of its contents.
Porosomes vary in size depending on the cell type. Porosomes in the exocrine pancreas and in endocrine and neuroendocrine cells range from 100 nm to 180 nm in diameter, while in neurons they range from 10 nm to 15 nm (about 1/10 the size of pancreatic porosomes). When a secretory vesicle containing v-SNARE docks at the porosome base containing t-SNARE, membrane continuity (a ring complex) is formed between the two. The size of the t/v-SNARE complex is directly proportional to the size of the vesicle. These vesicles contain dehydrated proteins (non-active) which are activated once they are hydrated. GTP is required for the transport of water through water channels (aquaporins), and of ions through ion channels, to hydrate the vesicle. Once the vesicle fuses at the porosome base, the contents of the vesicle at high pressure are ejected from the cell.
Generally, porosomes are opened and closed by actin; neurons, however, require a fast response and therefore have central plugs that open to release contents and close to stop the release (the composition of the central plug has yet to be determined). Porosomes have been demonstrated to be the universal secretory machinery in cells. The neuronal porosome proteome has been solved, providing the possible molecular architecture and the complete composition of the machinery.
History of discovery
The porosome was discovered in the early to mid-1990s by a team led by Professor Bhanu Pratap Jena at Yale University School of Medicine, using atomic force microscopy.
References
Further reading
External links
Molecular Machinery & Mechanism of Cell Secretion Jena Lab at Wayne State University School of Medicine
Cell anatomy
Membrane biology | Porosome | [
"Chemistry"
] | 717 | [
"Membrane biology",
"Molecular biology"
] |
15,969,808 | https://en.wikipedia.org/wiki/Tribromoisocyanuric%20acid | Tribromoisocyanuric acid (C3Br3N3O3) is a chemical compound used as a reagent for bromination in organic synthesis. It is a white crystalline powder with a strong bromine odour. It is similar to trichloroisocyanuric acid.
Uses
Tribromoisocyanuric acid is used for the bromination of aromatics and alkenes.
References
Organobromides
Reagents for organic chemistry
Ureas
Triazines | Tribromoisocyanuric acid | [
"Chemistry"
] | 108 | [
"Organic compounds",
"Reagents for organic chemistry",
"Ureas"
] |
15,969,980 | https://en.wikipedia.org/wiki/Jerzy%20Giedymin | Jerzy Giedymin (September 18, 1925 – June 24, 1993) was a philosopher and historian of mathematics and science.
Life
Giedymin, of Polish origin, was born in 1925.
He studied at the University of Poznań under Kazimierz Ajdukiewicz. In 1953 Jerzy Giedymin succeeded Adam Wiegner at the Chair of Logic at the Faculty of Philosophy.
The so-called Poznań School was a Marxist current of philosophy marked by an idealisational theory of science which emphasised the scientific features of Marxism in close confrontation with contemporary logic and epistemology.
In 1968 Giedymin moved to England and attended seminars by Karl Popper at the London School of Economics.
In 1971 he came to Sussex to become Professor at the School of Mathematical and Physical Sciences of the University of Sussex.
Giedymin died during a trip to Poland on 24 June 1993.
Work
Giedymin was convinced that Henri Poincaré's conventionalist philosophy was fundamentally misunderstood and thus underestimated. Giedymin argues that Poincaré was at the origin of much of the 20th century's innovations in relativity theory and quantum physics.
Giedymin's standpoint was much influenced by his exposure to Kazimierz Ajdukiewicz's perception of the history of ideas which in defiance of traditional empiricism reviews the philosophy of science of the early 20th century in the light of pragmatic conventionalism.
Bibliography
Books
Jerzy Giedymin, Z problemow logicznych analizy historycznej [Some Logical Problems of Historical Analysis], Poznanskie towarzystwo przyjaciol nauk. Wydzial filologiczno-filozoficzny. Prace Komisji filozoficznej. tom 10. zesz. 3., Poznań, 1961.
Jerzy Giedymin, Problemy, zalozenia, rozstrzygniecia. Studia nad logicznymi podstawami nauk spolecznych [Questions, assumptions, decidability. Essays concerning the logical functions of the social sciences], Polskie Towarzystwo Ekonomiczne. Oddzial w Poznaniu. Rozprawy i monografie. No. 10, Poznań, 1964.
Jerzy Giedymin ed., Kazimierz Ajdukiewicz: The scientific world-perspective and other essays, 1931-1963, Dordrecht: D. Reidel Publishing Co., 1974
Jerzy Giedymin, Science and convention: essays on Henri Poincaré’s philosophy of science and the conventionalist tradition, Oxford: Pergamon, 1982
Articles (selection)
Jerzy Giedymin, "Confirmation, critical region and empirical content of hypotheses", in Studia Logica, Volume 10, Number 1 (1960)
Jerzy Giedymin, "A generalization of the refutability postulate", in Studia Logica, Volume 10, Number 1 (1960)
Jerzy Giedymin, "Authorship hypotheses and reliability of informants", in Studia Logica, Volume 12, Number 1 (1961)
Jerzy Giedymin, "Reliability of Informants", in British Journal for the Philosophy of Science, XIII (1963)
Jerzy Giedymin, "The Paradox of Meaning Variance", in British Journal for the Philosophy of Science, 21 (1970)
Jerzy Giedymin, "Consolations for the Irrationalist", in British Journal for the Philosophy of Science, 22 (1971)
Jerzy Giedymin, "Antipositivism in Contemporary Philosophy of Social Sciences and Humanities", in British Journal for the Philosophy of Science, 26 (1975)
Jerzy Giedymin, "On the origin and significance of Poincaré's conventionalism", in Studies in History and Philosophy of Science, Vol.8, No.4 (1977)
Jerzy Giedymin, "Revolutionary changes, non-translatability and crucial experiments", in Problems of the Philosophy of Science, Amsterdam: North Holland, 1968
Jerzy Giedymin, "The Physics of the Principles and Its Philosophy: Hamilton, Poincaré and Ramsey", in Science and Convention: Essays on Henri Poincaré's Philosophy of Science and the Conventionalist Tradition. Oxford: Pergamon (1982)
Jerzy Giedymin, "Geometrical and Physical Conventionalism of Henri Poincaré in Epistemologial Formulation", in Studies in History and Philosophy of Science, 22 (1991)
Jerzy Giedymin, "Conventionalism, the Pluralist Conception of Theories and the Nature of Interpretation", in Studies in History and Philosophy of Science, 23 (1992)
Jerzy Giedymin, "Radical Conventionalism, Its Background and Evolution: Poincare, Leroy, Ajdukiewicz", in Vito Sinisi & Jan Wolenski (ed.), The Heritage of Kazimierz Ajdukiewicz, Amsterdam, Rodopi, 1995
Jerzy Giedymin, "Ajdukiewicz's Life and Personality", in Vito Sinisi & Jan Wolenski (ed.), The Heritage of Kazimierz Ajdukiewicz, Amsterdam, Rodopi, 1995
Jerzy Giedymin, "Strength, Confirmation, Compatibility", in Mario Bunge (ed.), in Critical Approaches to Science and Philosophy (Science and Technology Studies). Piscataway, N.J.:Transaction Publishers (1998)
About Jerzy Giedymin
Laurent Rollet, Le conventionnalisme géométrique de Henri Poincaré : empirisme ou apriorisme ? Une étude des thèses de Adolf Grünbaum et Jerzy Giedymin, Université de Nancy 2, 1993
Laurent Rollet, "The Grünbaum-Giedymin Controversy Concerning the Philosophical Interpretation of Poincaré's Geometrical Conventionalism" in Krystyna Zamiara (ed.) The Problems Concerning the Philosophy of Science and Science Itself, Poznań, Wydawnictwo Fundacji Humaniora (1995)
Krystyna Zamaria, "Jerzy Giedymin – From the Logic of Science to the Theoretical History of Science", in Wladyslaw Krajewski (ed.), Polish Philosophers of Science and Nature in the 20th Century, Amsterdam, 2001
External links
Obituary: Jerzy Giedymin Obituary published in The Independent
The Poznań School Presentation of the Poznań school
Polish Logic of the Postwar Period Article by Jan Zygmunt of the University of Wrocław
The Giedymin - Grünbaum Controversy Concerning the Philosophical Interpretation of Geometrical Conventionalism Article by Laurent Rollet
French Conventionalism and its Influence on Polish Philosophy Article by Anna Jedynak in Parerga – MIĘDZYNARODOWE STUDIA FILOZOFICZNE, No. 2 (2007)
20th-century Polish philosophers
1925 births
1993 deaths
Polish logicians
Academics of the University of Sussex
Philosophers of science
Mathematical logicians | Jerzy Giedymin | [
"Mathematics"
] | 1,493 | [
"Mathematical logic",
"Mathematical logicians"
] |
15,970,956 | https://en.wikipedia.org/wiki/Haagerup%20property | In mathematics, the Haagerup property, named after Uffe Haagerup and also known as Gromov's a-T-menability, is a property of groups that is a strong negation of Kazhdan's property (T). Property (T) is considered a representation-theoretic form of rigidity, so the Haagerup property may be considered a form of strong nonrigidity; see below for details.
The Haagerup property is interesting to many fields of mathematics, including harmonic analysis, representation theory, operator K-theory, and geometric group theory.
Perhaps its most impressive consequence is that groups with the Haagerup Property satisfy the Baum–Connes conjecture and the related Novikov conjecture. Groups with the Haagerup property are also uniformly embeddable into a Hilbert space.
Definitions
Let G be a second countable locally compact group. The following properties are all equivalent, and any of them may be taken as the definition of the Haagerup property:
There is a proper, continuous, conditionally negative definite function ψ : G → ℝ (these terms are recalled in the sketch after this list).
G has the Haagerup approximation property, also known as Property C0: there is a sequence of normalized continuous positive-definite functions on G which vanish at infinity and converge to 1 uniformly on compact subsets of G.
There is a strongly continuous unitary representation of G which weakly contains the trivial representation and whose matrix coefficients vanish at infinity on G.
There is a proper continuous affine isometric action of G on a Hilbert space.
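For reference, a commonly used formulation of the terms in the first definition (a sketch of the standard conventions, not taken from this article): a function ψ : G → ℝ is conditionally negative definite if it is symmetric, vanishes at the identity, and satisfies

```latex
\[
  \sum_{i,j=1}^{n} c_i\, c_j\, \psi\!\left(g_i^{-1} g_j\right) \le 0
  \quad\text{whenever } g_1,\dots,g_n \in G,\ c_1,\dots,c_n \in \mathbb{R},\ \sum_{i=1}^{n} c_i = 0,
\]
```

while properness means that ψ(g) tends to infinity as g leaves compact subsets of G.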
Examples
There are many examples of groups with the Haagerup property, most of which are geometric in origin. The list includes:
All compact groups (trivially). Note all compact groups also have property (T). The converse holds as well: if a group has both property (T) and the Haagerup property, then it is compact.
SO(n,1)
SU(n,1)
Groups acting properly on trees or on ℝ-trees
Coxeter groups
Amenable groups
Groups acting properly on CAT(0) cubical complexes
Sources
Representation theory
Geometric group theory | Haagerup property | [
"Physics",
"Mathematics"
] | 416 | [
"Geometric group theory",
"Group actions",
"Fields of abstract algebra",
"Representation theory",
"Symmetry"
] |
15,972,344 | https://en.wikipedia.org/wiki/Kappa%20Crucis%20%28star%29 | Kappa Crucis (κ Cru, HD 111973) is a spectroscopic binary star in the open cluster NGC 4755, which is also known as the Kappa Crucis Cluster or Jewel Box Cluster.
Location
κ Crucis is one of the brightest members of the open cluster that bears its name, better known as the Jewel Box Cluster. It forms one leg, at bottom right or south, of the prominent letter "A" asterism at the centre of the cluster. The cluster is part of the larger Centaurus OB1 association and lies about 8,500 light-years away.
The cluster, and κ Cru itself, is just to the south-east of β Crucis, the lefthand star of the famous Southern Cross.
Properties
κ Crucis is a B3 bright supergiant (luminosity class Ia). Radial velocity variations in the spectral lines indicate that it has an unresolved companion star. It is over 100,000 times as luminous as the Sun, partly because of its high temperature and partly because of its large size. The κ Crucis cluster has a calculated age of 11.2 million years.
References
External links
Crux
Crucis, Kappa
111973
B-type supergiants
062931
4890
Durchmusterung objects
J12534890-6022344
Spectroscopic binaries | Kappa Crucis (star) | [
"Astronomy"
] | 291 | [
"Crux",
"Constellations"
] |
15,972,589 | https://en.wikipedia.org/wiki/DL%20Crucis | DL Crucis is a variable star in the constellation Crux.
Visibility
DL Crucis has a visual apparent magnitude of 6.3 so it is just visible with the unaided eye in dark skies. It lies in the small southern constellation of Crux, halfway between η Crucis and ζ Crucis and close to the constellation's brightest star α Crucis. This area of sky lies within the Milky Way and close to the Coalsack Nebula.
Properties
DL Crucis has a spectral type of B1.5 Ia, making it a luminous blue supergiant with a temperature over 20,000 K and a luminosity 251,000 times that of the Sun. It has a radius around 42 times and a mass about 30 times that of the Sun.
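As a rough consistency check (not in the source; it assumes the quoted values above and a solar effective temperature of about 5,772 K), the Stefan–Boltzmann relation reproduces the quoted luminosity to within the rounding of the inputs:

```latex
\[
  \frac{L}{L_\odot} \;=\; \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4}
  \;\approx\; 42^{2}\left(\frac{20\,000}{5\,772}\right)^{4}
  \;\approx\; 1.8\times10^{3}\times 1.4\times10^{2}
  \;\approx\; 2.5\times10^{5}.
\]
```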
Variability
In 1977 DL Crucis, then referred to as HR 4653, was being used as a comparison star to test the variability of δ Crucis. δ Crucis turned out to be constant relative to several other stars, but the difference in brightness between it and HR 4653 changed by 0.02 magnitude. It was considered likely to be a variable with a period longer than seven hours.
Hipparcos photometry showed that DL Crucis was varying by up to 0.04 magnitude with a main period of 2 days 21 hours. It was classified as an α Cygni variable. Shortly afterwards it received its variable star designation of DL Crucis.
A later detailed statistical analysis of the same data found periods of 3.650 and 3.906 days, as well as a first harmonic pulsation, with a maximum brightness range of 0.11 magnitudes.
References
Crux
Alpha Cygni variables
B-type supergiants
Crucis, DL
106343
4653
Durchmusterung objects
059678 | DL Crucis | [
"Astronomy"
] | 370 | [
"Crux",
"Constellations"
] |
15,972,636 | https://en.wikipedia.org/wiki/Planarity%20testing | In graph theory, the planarity testing problem is the algorithmic problem of testing whether a given graph is a planar graph (that is, whether it can be drawn in the plane without edge intersections). This is a well-studied problem in computer science for which many practical algorithms have emerged, many taking advantage of novel data structures. Most of these methods operate in O(n) time (linear time), where n is the number of edges (or vertices) in the graph, which is asymptotically optimal. Rather than just being a single Boolean value, the output of a planarity testing algorithm may be a planar graph embedding, if the graph is planar, or an obstacle to planarity such as a Kuratowski subgraph if it is not.
Planarity criteria
Planarity testing algorithms typically take advantage of theorems in graph theory that characterize the set of planar graphs in terms that are independent of graph drawings.
These include
Kuratowski's theorem that a graph is planar if and only if it does not contain a subgraph that is a subdivision of K5 (the complete graph on five vertices) or K3,3 (the utility graph, a complete bipartite graph on six vertices, three of which connect to each of the other three).
Wagner's theorem that a graph is planar if and only if it does not contain a minor (subgraph of a contraction) that is isomorphic to K5 or K3,3.
The Fraysseix–Rosenstiehl planarity criterion, characterizing planar graphs in terms of a left-right ordering of the edges in a depth-first search tree.
The Fraysseix–Rosenstiehl planarity criterion can be used directly as part of algorithms for planarity testing, while Kuratowski's and Wagner's theorems have indirect applications: if an algorithm can find a copy of K5 or K3,3 within a given graph, it can be sure that the input graph is not planar and return without additional computation.
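As a hedged usage sketch (Python with the networkx library, which, to the best of my understanding, implements a left-right style linear-time planarity test; the function name and its return convention follow the networkx documentation, but treat the details as an assumption to verify against the installed version):

```python
import networkx as nx

# K5 and K3,3 are the canonical non-planar graphs from Kuratowski's theorem.
G = nx.complete_bipartite_graph(3, 3)            # K_{3,3}
is_planar, certificate = nx.check_planarity(G, counterexample=True)
print(is_planar)                                 # False
print(sorted(certificate.edges()))               # edges of a Kuratowski subgraph (here K_{3,3} itself)

H = nx.cycle_graph(6)                            # a planar graph
is_planar, embedding = nx.check_planarity(H)
print(is_planar)                                 # True; `embedding` is a combinatorial planar embedding
```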
Other planarity criteria, that characterize planar graphs mathematically but are less central to planarity testing algorithms, include:
Whitney's planarity criterion that a graph is planar if and only if its graphic matroid is also cographic,
Mac Lane's planarity criterion characterizing planar graphs by the bases of their cycle spaces,
Schnyder's theorem characterizing planar graphs by the order dimension of an associated partial order, and
Colin de Verdière's planarity criterion using spectral graph theory.
Algorithms
Path addition method
The classic path addition method of Hopcroft and Tarjan was the first published linear-time planarity testing algorithm in 1974. An implementation of Hopcroft and Tarjan's algorithm is provided in the Library of Efficient Data types and Algorithms by Mehlhorn, Mutzel and Näher. In 2012, Taylor extended this algorithm to generate all permutations of cyclic edge-order for planar embeddings of biconnected components.
Vertex addition method
Vertex addition methods work by maintaining a data structure representing the possible embeddings of an induced subgraph of the given graph, and adding vertices one at a time to this data structure. These methods began with an inefficient O(n^2) method conceived by Lempel, Even and Cederbaum in 1967. It was improved by Even and Tarjan, who found a linear-time solution for the s,t-numbering step, and by Booth and Lueker, who developed the PQ tree data structure. With these improvements it is linear-time and outperforms the path addition method in practice. This method was also extended to allow a planar embedding (drawing) to be efficiently computed for a planar graph. In 1999, Shih and Hsu simplified these methods using the PC tree (an unrooted variant of the PQ tree) and a postorder traversal of the depth-first search tree of the vertices.
Edge addition method
In 2004, John Boyer and Wendy Myrvold developed a simplified O(n) algorithm, originally inspired by the PQ tree method, which gets rid of the PQ tree and uses edge additions to compute a planar embedding, if possible. Otherwise, a Kuratowski subdivision (of either K5 or K3,3) is computed. This is one of the two current state-of-the-art algorithms today (the other one is the planarity testing algorithm of de Fraysseix, Ossona de Mendez and Rosenstiehl). An experimental comparison with a preliminary version of the Boyer and Myrvold planarity test has also been published. Furthermore, the Boyer–Myrvold test was extended to extract multiple Kuratowski subdivisions of a non-planar input graph in a running time linearly dependent on the output size. The source code for the planarity test and the extraction of multiple Kuratowski subdivisions is publicly available. Algorithms that locate a Kuratowski subgraph in linear time in vertices were developed by Williamson in the 1980s.
Construction sequence method
A different method uses an inductive construction of 3-connected graphs to incrementally build planar embeddings of every 3-connected component of G (and hence a planar embedding of G itself). The construction starts with K4 and is defined in such a way that every intermediate graph on the way to the full component is again 3-connected. Since such graphs have a unique embedding (up to flipping and the choice of the external face), the next bigger graph, if still planar, must be a refinement of the former graph. This allows to reduce the planarity test to just testing for each step whether the next added edge has both ends in the external face of the current embedding. While this is conceptually very simple (and gives linear running time), the method itself suffers from the complexity of finding the construction sequence.
Dynamic algorithms
Planarity testing has been studied in the Dynamic Algorithms model, in which one maintains an answer to a problem (in this case planarity) as the graph undergoes local updates, typically in the form of insertion/deletion of edges. In the edge-arrival case, there is an asymptotically tight inverse-Ackermann function update-time algorithm due to La Poutré, improving upon algorithms by Di Battista, Tamassia, and Westbrook. In the fully-dynamic case where edges are both inserted and deleted, there is a logarithmic update-time lower bound by Pătrașcu and Demaine, and a polylogarithmic update-time algorithm by Holm and Rotenberg, improving on sub-linear update-time algorithms by Eppstein, Galil, Italiano, Sarnak, and Spencer.
References
Planar graphs
Computational problems in graph theory | Planarity testing | [
"Mathematics"
] | 1,442 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Planar graphs",
"Mathematical relations",
"Planes (geometry)",
"Mathematical problems"
] |
15,973,371 | https://en.wikipedia.org/wiki/Suicidal%20person | A suicidal person is one who is experiencing a personal suicide crisis; the person is seeking a means to die by suicide, or is contemplating suicide.
Recognizing a suicidal person
A suicidal person may exhibit certain behaviors. A person who has a preoccupation with death, talks excessively about suicide, or becomes socially withdrawn may be contemplating suicide. Other behaviors in suicidal people include reckless behaviors (such as increased drug and alcohol use, or taking unnecessary risks like dangerous driving), unexpected or unusual farewells to family and friends, and seeking out means to kill themselves (such as acquiring pills, guns, or other lethal objects).
Causes
See Discussion on causes of suicide for more information.
In many cases, suicide is an attempt to escape a situation that causes unbearable suffering. A majority of those who die by suicide suffer from depression, alcoholism or other mental health problems such as bipolar disorder. Some who die by suicide have organic disorders such as brain trauma and epilepsy.
Views on suicidal people
Legal
For much of history, people who attempted suicide were seen as violating laws against murder. By the eleventh century, courts "regarded suicide as 'murder of oneself'", which was "therefore viewed...as a criminal act." These legal repercussions for suicide persisted into modern times. England had laws against suicide until 1961, and between 1946 and 1956 "over 5,000 [people] were found guilty [of attempting suicide] and sentenced to either jail or prison." The United States too had laws against suicide as late as 1964, and Islamic holy law also forbids suicide.
Other factors
Other factors that influence suicidal people are families with a history of suicide and cultural or religious beliefs that glorify suicide.
See also
Suicide
Suicidal ideation
Suicide intervention
Notes and references
Suicide | Suicidal person | [
"Biology"
] | 359 | [
"Behavior",
"Human behavior",
"Suicide"
] |
15,974,212 | https://en.wikipedia.org/wiki/Geohash | Geohash is a public domain geocode system invented in 2008 by Gustavo Niemeyer which encodes a geographic location into a short string of letters and digits. Similar ideas were introduced by G.M. Morton in 1966. It is a hierarchical spatial data structure which subdivides space into buckets of grid shape, which is one of the many applications of what is known as a Z-order curve, and generally space-filling curves.
Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision). Geohashing guarantees that the longer a shared prefix between two geohashes is, the spatially closer they are together. The reverse of this is not guaranteed, as two points can be very close but have a short or no shared prefix.
History
The core part of the Geohash algorithm and the first initiative to similar solution was documented in a report of G.M. Morton in 1966, "A Computer Oriented Geodetic Data Base and a New Technique in File Sequencing". The Morton work was used for efficient implementations of Z-order curve, like in this modern (2014) Geohash-integer version (based on directly interleaving 64-bit integers), but his geocode proposal was not human-readable and was not popular.
Apparently, in the late 2000s, G. Niemeyer still didn't know about Morton's work, and reinvented it, adding the use of base32 representation. In February 2008, together with the announcement of the system, he launched the website http://geohash.org, which allows users to convert geographic coordinates to short URLs which uniquely identify positions on the Earth, so that referencing them in emails, forums, and websites is more convenient.
Many variations have been developed, including OpenStreetMap's short link (using base64 instead of base32) in 2009, the 64-bit Geohash in 2014, the exotic Hilbert-Geohash in 2016, and others.
Typical and main usages
To obtain the Geohash, the user provides an address to be geocoded, or latitude and longitude coordinates, in a single input box (most commonly used formats for latitude and longitude pairs are accepted), and performs the request.
Besides showing the latitude and longitude corresponding to the given Geohash, users who navigate to a Geohash at geohash.org are also presented with an embedded map, and may download a GPX file, or transfer the waypoint directly to certain GPS receivers. Links are also provided to external sites that may provide further details around the specified location.
For example, the coordinate pair 57.64911,10.40744 (near the tip of the peninsula of Jutland, Denmark) produces the geohash u4pruydqqvj.
The main usages of Geohashes are:
As a unique identifier.
To represent point data, e.g. in databases.
Geohashes have also been proposed to be used for geotagging.
When used in a database, the structure of geohashed data has two advantages. First, data indexed by geohash will have all points for a given rectangular area in contiguous slices (the number of slices depends on the precision required and the presence of geohash "fault lines"). This is especially useful in database systems where queries on a single index are much easier or faster than multiple-index queries. Second, this index structure can be used for a quick-and-dirty proximity search: the closest points are often among the closest geohashes.
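To illustrate the prefix-slicing idea just described, the tiny Python sketch below filters a set of hashes by a shared prefix; the neighbouring and distant hashes are made-up values used only for the example.

points = {
    "u4pruydqqvj": "tip of Jutland (from the example above)",
    "u4pruydqqvm": "hypothetical nearby point",
    "gbsuv7ztqzp": "hypothetical distant point",
}
prefix = "u4pruyd"   # a cell roughly covering the area of interest
nearby = {h: label for h, label in points.items() if h.startswith(prefix)}
print(nearby)        # keeps the two hashes that share the prefix, drops the third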
Technical description
A formal description for Computational and Mathematical views.
Textual representation
For exact latitude and longitude translations, Geohash is a spatial index of base 4, because it transforms the continuous latitude and longitude space coordinates into a hierarchical discrete grid, using a recurrent four-partition of the space. To be a compact code, it uses base 32 and represents its values with the following alphabet, which is the "standard textual representation".
The "Geohash alphabet" (32ghs) uses all digits 0-9 and all lower case letters except "a", "i", "l" and "o".
For example, using the table above and the base constant 32, the Geohash ezs42 can be converted to a decimal representation by ordinary positional notation:
[ezs42]32ghs = 13×32^4 + 31×32^3 + 24×32^2 + 4×32^1 + 2×32^0
= 13,631,488 + 1,015,808 + 24,576 + 128 + 2
= 14,672,002
Geometrical representation
The geometry of the Geohash has a mixed spatial representation:
Geohashes with an even number of digits (2, 4, 6, ...) are represented by the Z-order curve in a "regular grid" where the decoded pair (latitude, longitude) has uniform uncertainty, valid as a Geo URI.
Geohashes with an odd number of digits (1, 3, 5, ...) are represented by the "И-order curve". Latitude and longitude of the decoded pair have different uncertainty (longitude is truncated).
It is possible to build the "И-order curve" from the Z-order curve by merging neighboring cells and indexing the resulting rectangular grid by the function . The illustration shows how to obtain the grid of 32 rectangular cells from the grid of 64 square cells.
The most important property of Geohash for humans is that it preserves spatial hierarchy in the code prefixes. For example, in the "1 Geohash digit grid" illustration of 32 rectangles, above, the spatial region of the code e (rectangle of greyish blue circle at position 4,3) is preserved with prefix e in the "2 digit grid" of 1024 rectangles (scale showing em and greyish green to blue circles at grid).
Algorithm and example
Using the hash ezs42 as an example, here is how it is decoded into a decimal latitude and longitude. The first step is decoding it from the textual "base 32ghs", as shown above, to obtain the binary representation.
This operation results in the bits 01101 11111 11000 00100 00010. Starting to count from the left side with the digit 0 in the first position, the digits in the even positions form the longitude code (0111110000000), while the digits in the odd positions form the latitude code (101111001001).
Each binary code is then used in a series of divisions, considering one bit at a time, again from the left to the right side. For the latitude value, the interval −90 to +90 is divided by 2, producing two intervals: −90 to 0, and 0 to +90. Since the first bit is 1, the higher interval is chosen, and becomes the current interval. The procedure is repeated for all bits in the code. Finally, the latitude value is the center of the resulting interval. Longitudes are processed in an equivalent way, keeping in mind that the initial interval is −180 to +180.
For example, in the latitude code 101111001001, the first bit is 1, so we know our latitude is somewhere between 0 and 90. Without any more bits, we'd guess the latitude was 45, giving us an error of ±45. Since more bits are available, we can continue with the next bit, and each subsequent bit halves this error. This table shows the effect of each bit. At each stage, the relevant half of the range is highlighted in green; a low bit selects the lower range, a high bit selects the upper range.
The column "mean value" shows the latitude, simply the mean value of the range. Each subsequent bit makes this value more precise.
(The numbers in the above table have been rounded to 3 decimal places for clarity)
Final rounding should be done carefully, in a way that the rounded value still lies within the error range computed above. So while rounding 42.605 to 42.61 or 42.6 is correct, rounding to 43 is not.
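A compact Python sketch of the decoding steps just described is given below; the helper names are hypothetical and this is not a reference implementation.

BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"   # the 32ghs alphabet described above

def decode_geohash(code):
    """Return the (latitude, longitude) centre of the cell encoded by `code`."""
    # Expand each character into 5 bits, most significant bit first.
    bits = []
    for ch in code:
        value = BASE32.index(ch)
        bits.extend((value >> shift) & 1 for shift in range(4, -1, -1))
    lon_bits = bits[0::2]     # even positions refine longitude
    lat_bits = bits[1::2]     # odd positions refine latitude

    def refine(low, high, bit_sequence):
        for bit in bit_sequence:
            mid = (low + high) / 2
            if bit:
                low = mid     # a high bit selects the upper half of the interval
            else:
                high = mid    # a low bit selects the lower half
        return (low + high) / 2

    return refine(-90.0, 90.0, lat_bits), refine(-180.0, 180.0, lon_bits)

print(decode_geohash("ezs42"))   # approximately (42.605, -5.603)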
Digits and precision in km
Limitations when used for deciding proximity
Edge cases
Geohashes can be used to find points in proximity to each other based on a common prefix. However, edge case locations close to each other but on opposite sides of the 180 degree meridian will result in Geohash codes with no common prefix (different longitudes for near physical locations). Points close to the North and South poles will have very different geohashes (different longitudes for near physical locations).
Two close locations on either side of the Equator (or Greenwich meridian) will not have a long common prefix since they belong to different 'halves' of the world. Put simply, one location's binary latitude (or longitude) will be 011111... and the other 100000...., so they will not have a common prefix and most bits will be flipped. This can also be seen as a consequence of relying on the Z-order curve (which could more appropriately be called an N-order visit in this case) for ordering the points, as two points close by might be visited at very different times. However, two points with a long common prefix will be close by.
In order to do a proximity search, one could compute the southwest corner (low geohash with low latitude and longitude) and northeast corner (high geohash with high latitude and longitude) of a bounding box and search for geohashes between those two. This search will retrieve all points on the z-order curve between the two corners, which can be far too many points. This method also breaks down at the 180° meridian and the poles. Solr uses a filter list of prefixes, computing the prefixes of the nearest squares close to the geohash.
Non-linearity
Since a geohash (in this implementation) is based on coordinates of longitude and latitude, the distance between two geohashes reflects the distance in latitude/longitude coordinates between two points, which does not translate to actual distance; see the Haversine formula.
Example of non-linearity for latitude-longitude system:
At the Equator (0 Degrees) the length of a degree of longitude is 111.320 km, while a degree of latitude measures 110.574 km, an error of 0.67%.
At 30 Degrees (Mid Latitudes) the error is 110.852/96.486 = 14.89%
At 60 Degrees (High Arctic) the error is 111.412/55.800 = 99.67%, reaching infinity at the poles.
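A quick numeric check of the figures above (assuming a spherical Earth and an equatorial degree length of roughly 111.32 km, both simplifications):

import math

equator_deg_km = 111.32                      # approx. length of 1 degree of longitude at the Equator
for lat in (0, 30, 60):
    lon_deg_km = equator_deg_km * math.cos(math.radians(lat))
    print(lat, round(lon_deg_km, 2))         # about 111.32, 96.41 and 55.66 km

The small differences from the values quoted above come from the ellipsoidal figures used there.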
Note that these limitations are not due to geohashing, and not due to latitude-longitude coordinates, but due to the difficulty of mapping coordinates on a sphere (non linear and with wrapping of values, similar to modulo arithmetic) to two dimensional coordinates and the difficulty of exploring a two dimensional space uniformly. The first is related to Geographical coordinate system and Map projection, and the other to Hilbert curve and z-order curve. Once a coordinate system is found that represents points linearly in distance and wraps up at the edges, and can be explored uniformly, applying geohashing to those coordinates will not suffer from the limitations above.
While it is possible to apply geohashing to an area with a Cartesian coordinate system, it would then only apply to the area where the coordinate system applies.
Despite those issues, there are possible workarounds, and the algorithm has been successfully used in Elasticsearch, MongoDB, HBase, Redis, and Accumulo to implement proximity searches.
Similar indexing systems
An alternative to storing Geohashes as strings in a database are Locational codes, which are also called spatial keys and similar to QuadTiles.
In some geographical information systems and Big Data spatial databases, a Hilbert curve based indexation can be used as an alternative to Z-order curve, like in the S2 Geometry library.
In 2019 a front-end was designed by QA Locate in what they called GeohashPhrase to use phrases to code Geohashes for easier communication via spoken English language. There were plans to make GeohashPhrase open source.
C-squares (2002)
FixPhrase
GeoKey (2018, proprietary)
Ghana Post GPS (2017)
International Postcode system using Cubic Meters (CubicPostcode.com)
Maidenhead Locator System (1980)
Makaney Code (2011)
MapCode (2008)
Military Grid Reference System
Natural Area Code
Open Location Code (2014, aka. "plus codes", Google Maps)
QRA locator (1959)
Universal Transverse Mercator coordinate system
verbal-id
what3words (2013, proprietary)
WhatFreeWords
wherewords.id
wolo.codes
GEOREF (similar 2-digit hierarchy code)
Xaddress
3Geonames (2018, open source)
Licensing
The Geohash algorithm was put in the public domain by its inventor in a public announcement on February 26, 2008.
While comparable algorithms have been successfully patented and
had copyright claimed upon, GeoHash is based on an entirely different algorithm and approach.
Formal Standard
Geohash is standardized as CTA-5009. This standard follows the Wikipedia article as of the 2023 version but provides additional detail in a formal (normative) reference. In the absence of an official specification since the creation of Geohash, the CTA WAVE organization published CTA-5009 to aid in broader adoption and compatibility across implementers in the industry.
See also
List of geodesic-geocoding systems
Geohash-36 (is not a Geohash-variant)
Grid (spatial index)
Maidenhead Locator System
Military Grid Reference System
Morton number (number theory)
Natural Area Code
Numbering scheme
Open Location Code (plus code)
space-filling curves
what3words
Z-order curve
References
External links
Geohash approximations for JTS geometries
The Geohash Playground
Geographic coordinate systems
Geocodes
2008 introductions | Geohash | [
"Mathematics"
] | 2,848 | [
"Geographic coordinate systems",
"Coordinate systems"
] |
15,974,705 | https://en.wikipedia.org/wiki/Gelidiaceae | The Gelidiaceae is a small family of red algae containing eight genera. Many species of this algae are used to make agar.
Uses
Agar can be derived from many types of red seaweeds, including those from families such as Gelidiaceae, Gracilariaceae, Gelidiellaceae and Pterocladiaceae. It is a polysaccharide located in the inner part of the red algal cell wall. It is used in food material, medicines, cosmetics, therapeutic and biotechnology industries.
References
Red algae families
Edible algae
Taxa named by Friedrich Traugott Kützing | Gelidiaceae | [
"Biology"
] | 126 | [
"Edible algae",
"Algae"
] |
9,288,958 | https://en.wikipedia.org/wiki/Snell%27s%20window | Snell's window (also called Snell's circle or optical man-hole) is a phenomenon by which an underwater viewer sees everything above the surface through a cone of light of width of about 96 degrees. This phenomenon is caused by refraction of light entering water, and is governed by Snell's Law. The area outside Snell's window will either be completely dark or show a reflection of underwater objects by total internal reflection.
Underwater photographers sometimes compose photographs from below such that their subjects fall inside Snell's window, which backlights and focuses attention on the subjects.
Image formation
Under ideal conditions, an observer looking up at the water surface from underneath sees a perfectly circular image of the entire above-water hemisphere—from horizon to horizon. Due to refraction at the air/water boundary, Snell's window compresses a 180° angle of view above water to a 97° angle of view below water, similar to the effect of a fisheye lens. The brightness of this image falls off to nothing at the circumference/horizon because more of the incident light at low grazing angles is reflected rather than refracted (see Fresnel equations). Refraction is very sensitive to any irregularities in the flatness of the surface (such as ripples or waves), which will cause local distortions or complete disintegration of the image. Turbidity in the water will veil the image behind a cloud of scattered light.
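As a quick check of the roughly 97° figure, the short Python calculation below derives the window width from the critical angle for total internal reflection; the refractive index of about 1.33 for water is an assumption.

import math

n_water = 1.33                                             # assumed refractive index of water
critical_angle = math.degrees(math.asin(1.0 / n_water))    # about 48.8 degrees
window_width = 2 * critical_angle                          # about 97.5 degrees
print(round(critical_angle, 1), round(window_width, 1))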
References
External links
Explanation of the physics behind Snell's window
Under-water photograph showing Snell's window
Water
Geometrical optics | Snell's window | [
"Environmental_science"
] | 330 | [
"Water",
"Hydrology"
] |
9,290,979 | https://en.wikipedia.org/wiki/Inocybe%20hystrix | Inocybe hystrix is an agaric fungus in the family Inocybaceae. It forms mycorrhiza with surrounding deciduous trees. Fruit bodies are usually found growing alone or in small groups on leaf litter during autumn months. Unlike many Inocybe species, Inocybe hystrix is densely covered in brown scales, a characteristic that aids in identification. The mushroom also has a spermatic odour that is especially noticeable when the mushroom is damaged or crushed.
Like many other Inocybe mushrooms, Inocybe hystrix contains dangerous amounts of muscarine and should not be consumed.
Taxonomy
The species was first described in 1838 by Elias Fries under the name Agaricus hystrix. Finnish mycologist Petter Karsten later (1879) transferred it to Inocybe.
Description
Fruit bodies have convex to plano-convex caps measuring in diameter. The caps are dry with scales that can be either erect or flat on the surface. The colour is brown in the centre, becoming paler towards the edges. The flesh is white, and has a spermatic odour and mild taste. The gills are closely spaced, white to dull brown, and have fringed edges. The stipe measures long by thick, and is roughly the same width throughout its length; like the cap, it is scaly.
The spore print is cinnamon brown. Spores are roughly almond-shaped, smooth, inamyloid, and measure 8–12.5 by 5–6.5 μm. Clamp connections are present in the hyphae.
The species is poisonous.
Habitat and distribution
In North America and Europe, Inocybe hystrix grows in deciduous forest, especially beech. In Costa Rica, it is found in the Cordillera Talamanca, where it associates with Quercus costaricensis at elevations around .
See also
List of Inocybe species
References
hystrix
Fungi described in 1838
Fungi of Europe
Poisonous fungi
Taxa named by Elias Magnus Fries
Fungus species | Inocybe hystrix | [
"Biology",
"Environmental_science"
] | 417 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
9,291,245 | https://en.wikipedia.org/wiki/Pandemic%20severity%20index | The pandemic severity index (PSI) was a proposed classification scale for reporting the severity of influenza pandemics in the United States. The PSI was accompanied by a set of guidelines intended to help communicate appropriate actions for communities to follow in potential pandemic situations. Released by the United States Department of Health and Human Services (HHS) on February 1, 2007, the PSI was designed to resemble the Saffir-Simpson Hurricane Scale classification scheme. The index was replaced by the Pandemic Severity Assessment Framework in 2014, which uses quadrants based on transmissibility and clinical severity rather than a linear scale.
Development
The PSI was developed by the Centers for Disease Control and Prevention (CDC) as a new pandemic influenza planning tool for use by states, communities, businesses and schools, as part of a drive to provide more specific community-level prevention measures. Although designed for domestic implementation, the HHS has not ruled out sharing the index and guidelines with interested international parties.
The index and guidelines were developed by applying principles of epidemiology to data from the history of the last three major flu pandemics and seasonal flu transmission, mathematical models, and input from experts and citizen focus groups. Many "tried and true" practices were combined in a more structured manner:
Context
During the onset of a growing pandemic, local communities cannot rely upon widespread availability of antiviral drugs and vaccines (See Influenza research).
The goal of the index is to provide guidance as to what measures various organizations can enact that will slow down the progression of a pandemic, easing the burden of stress upon community resources while definite solutions, like drugs and vaccines, can be brought to bear on the situation. The CDC expects adoption of the PSI will allow early co-ordinated use of community mitigation measures to affect pandemic progression.
Guidelines
The index focuses less on how likely a disease will spread worldwide – that is, become a pandemic – and more upon how severe the epidemic actually is.
The main criterion used to measure pandemic severity will be case-fatality rate (CFR), the percentage of deaths out of the total reported cases of the disease. The actual implementation of PSI alerts was expected to occur after the World Health Organization (WHO) announces phase 6 influenza transmission (human to human) in the United States. This would probably result in immediate announcement of a PSI level 3–4 situation.
The analogy of "category" levels were introduced to provide an understandable connection to hurricane classification schemes, with specific reference to the aftermath of Hurricane Katrina.
Like the Saffir–Simpson Hurricane Scale, the PSI ranges from 1 to 5, with Category 1 pandemics being most mild (equivalent to seasonal flu) and level 5 being reserved for the most severe "worst-case" scenario pandemics (such as the 1918 Spanish flu).
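As a sketch of how the CFR criterion above could drive a category, the Python snippet below computes a CFR and maps it onto a 1 to 5 scale; the threshold values are illustrative placeholders, not the official HHS/CDC cut-offs.

def case_fatality_ratio(deaths, reported_cases):
    """CFR as a percentage of deaths among reported cases."""
    return 100.0 * deaths / reported_cases

def psi_category(cfr_percent, thresholds=(0.1, 0.5, 1.0, 2.0)):
    """Map a CFR (percent) to a 1-5 category; thresholds are assumed, ascending."""
    category = 1
    for cutoff in thresholds:
        if cfr_percent >= cutoff:
            category += 1
    return category

cfr = case_fatality_ratio(deaths=2000, reported_cases=1_000_000)   # 0.2 percent
print(cfr, psi_category(cfr))                                      # 0.2 2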
The report recommends four primary social distancing measures for slowing down a pandemic:
Isolation and treatment of people who have suspected or confirmed cases of pandemic influenza
Voluntary home quarantine of household contacts of those with suspected or confirmed pandemic influenza
Dismissing school classes and closing daycare centers
Changing work schedules and canceling large public gatherings
These actions, when implemented, can have an overall effect of reducing the number of new cases of the disease; but they can carry potentially adverse consequences in terms of community and social disruption. The measures should have the most noticeable impact if implemented uniformly by organizations and governments across the US.
Response
While unveiling the PSI, Dr. Martin Cetron, Director for the Division of Global Migration and Quarantine at the CDC, reported that early feedback to the idea of a pandemic classification scale has been "uniformly positive".
The University of Minnesota's Center for Infectious Disease Research and Policy (CIDRAP) reports that the PSI has been "drawing generally high marks from public health officials and others, but they say the plan spells a massive workload for local planners". One MD described the PSI as "a big improvement over the previous guidance"; while historical influenza expert and author John M. Barry was more critical of the PSI, saying not enough emphasis was placed on basic health principles that could have an impact at the community level, adding "I'd feel a lot more comfortable with a lot more research [supporting them]".
During the initial press releases in 2007, the CDC acknowledged that the PSI and the accompanying guidelines were a work in progress and would likely undergo revision in the months following their release.
In 2014, after the 2009 swine flu pandemic, the PSI was replaced by the Pandemic Severity Assessment Framework, which uses quadrants based on transmissibility and clinical severity rather than a linear scale.
See also
Pandemic Severity Assessment Framework
2009 H1N1 influenza pandemic
Early Warning and Response System
WHO pandemic phases
References
Medical assessment and evaluation instruments
Pandemics
Centers for Disease Control and Prevention
Influenza pandemics
Index numbers | Pandemic severity index | [
"Mathematics"
] | 1,014 | [
"Index numbers",
"Mathematical objects",
"Numbers"
] |
9,291,334 | https://en.wikipedia.org/wiki/Limonin | Limonin is a limonoid, and a bitter, white, crystalline substance found in citrus and other plants. It is also known as limonoate D-ring-lactone and limonoic acid di-delta-lactone. Chemically, it is a member of the class of compounds known as furanolactones.
Sources
Limonin is enriched in citrus fruits and is often found at higher concentrations in seeds, for example orange and lemon seeds.
Presence in citrus products
Limonin and other limonoid compounds contribute to the bitter taste of some citrus food products. Researchers have proposed removal of limonoids from orange juice and other products (known as "debittering") through the use of polymeric films.
Research
Limonin is under basic research to assess its possible biological properties.
References
External links
"Citrus Compound: ready to help your body!" (Agricultural Research Service, USDA)
Epoxides
3-Furyl compounds
Delta-lactones
Terpenes and terpenoids | Limonin | [
"Chemistry"
] | 205 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Terpenes and terpenoids",
"Natural products"
] |
9,292,180 | https://en.wikipedia.org/wiki/Theatrical%20smoke%20and%20fog | Theatrical smoke and fog, also known as special effect smoke, fog or haze, is a category of atmospheric effects used in the entertainment industry. The use of fogs can be found throughout motion picture and television productions, live theatre, concerts, at nightclubs and raves, amusement and theme parks and even in video arcades and similar venues. These atmospheric effects are used for creating special effects, to make lighting and lighting effects visible, and to create a specific sense of mood or atmosphere. Recently smaller, cheaper fog machines have become available to the general public, and fog effects are becoming more common in residential applications, from small house parties to Halloween and Christmas.
Theatrical fog and theatrical fog machines are also becoming more prevalent in industrial applications outside of the entertainment industry, due to their ease of use, inherent portability and ruggedness. Common popular applications for theatrical fog include environmental testing (such as HVAC inspections) as well as emergency personnel and disaster response training exercises.
Militaries have historically used smoke and fog to mask troop movements in training and combat, the techniques of which are technologically similar to those used in theatre and film.
Health harms can be caused by short- and long-term exposure to artificial fogs. Some types of fog are less healthy than others. Handling the generating equipment also has health risks.
Types of effects
There are generally 4 types of fog effects used in entertainment applications: smoke, fog, haze, and "low-lying" effects.
Smoke
Smoke effects refers to theatrical atmospheric effects produced either by pyrotechnic materials, such as Smoke Cookies, and pre-fabricated smoke cartridges; or other, flammable substances such as incense or HVAC smoke pencils or pens.
Smoke is differentiated from other atmospheric effects in that it is composed of solid particles released during combustion, rather than the liquid droplets that fog or haze are composed of.
Fog
Fog is created by pumping one of a variety of different glycol or glycol/water mixtures (referred to as fog fluid) into a heat exchanger (essentially a block of metal with a resistance heating element in it) and heating until the fluid vapourises, creating a thick translucent or opaque cloud. Devices specifically manufactured for this purpose are referred to as fog machines.
An obsolete method for creating theatrical fog on-stage (although the technique is still used in motion pictures) is to use a device known as a thermal fogger, initially designed for distributing pesticide, which aspirates a petroleum product (typically kerosene or propane), ignites the fuel to create a flame, and then heats a mixture of air and pesticide to create a dense fog. This technique is similar to the smoke generators used by military forces to create smoke screens, and is generally only used outdoors due to the volume of fog produced and the petroleum fuel required. For theatrical purposes the pesticide is typically replaced with glycol, glycol/water mixtures, or water.
"Low-lying" fog effects can be created by combining a fog machine with another device designed specifically for this purpose. As the fog exits the fog machine it is chilled, either by passing through a device containing a fan and ice, or by passing through a device containing a fan and compressor similar to an air conditioner. The result is a relatively thick fog that stays within a few feet of the ground. As the fog warms, or is agitated, it rises and dissipates. Several manufacturers of theatrical fog fluid have developed specially formulated mixtures specifically designed to be used with , intended to provide thicker, more consistent fog effects. Although these chilling devices do not use carbon dioxide, the specially formulated fog fluid does create denser fog than regular fog fluid.
Haze
Haze effects refer to creating an unobtrusive, homogeneous cloud intended primarily to reveal lighting beams, such as "light fingers" in a rock concert. This effect is produced using a haze machine, typically done in one of two ways. One technique uses mineral oil, atomized via a spray pump powered either by electricity or compressed CO2, breaking the mineral oil into a fine mist. Another technique for creating haze uses a glycol/water mixture to create haze in a process nearly identical to that for creating fog effects. In either case the fluid used is referred to as haze fluid, but the different formulations are not compatible or interchangeable. Glycol/water haze fluid is sometimes referred to as "water based haze" to avoid ambiguity.
Smaller volumes of haze can also be generated from aerosol canisters containing mineral oil under pressure. Although the density of haze generated and the volume of space that can be filled is significantly smaller than that of a haze machine, aerosol canisters have the advantages of portability, no requirements for electricity and finer control over the volume of haze generated.
Carbon dioxide and dry ice
Liquid carbon dioxide (CO2), stored in compressed cylinders, is used in conjunction with theatrical fog machines to produce "low-lying" fog effects. When liquid CO2 is used to chill theatrical fog, the result is a thick fog that stays within a few feet of the ground. As the fog warms, or is agitated, it rises and dissipates. Several manufacturers of theatrical fog fluid have developed specially formulated mixtures specifically designed to be used with CO2, intended to provide thicker, more consistent fog effects. Effect duration is determined by the heating cycle of the theatrical fog machine and consumption rate of liquid CO2.
Large billowing fog plumes are created by condensation of the liquid into which dry ice is submerged. As dry ice is submerged into a bulk of liquid, pure CO2 gas bubbles form, and the bulk liquid molecules begin to evaporate into the gas bubbles at their surfaces. The evaporated liquid molecules later condense within the bubbles, creating a fog, which leads to further evaporation of liquid molecules into the gas bubbles in accordance with Le Chatelier's principle. The fog is released through an electric solenoid valve to control timing and duration. When the solenoid valve is closed, the fog rapidly disperses in the air, ending the effect nearly instantaneously. This effect can be used for a variety of applications, including simulating geysers of steam, in place of pyrotechnics, or to create an instant opaque wall for a reveal or disappearance during magic acts.
Dry ice (solid carbon dioxide) effects are produced by heating water to or near boiling in a suitable container (for example: a 55-gallon drum with water heater coils in it), and then dropping in one or more pieces of dry ice. Because carbon dioxide cannot exist as a liquid at atmospheric pressure, the dry ice sublimates and instantly produces a gas, condensing water vapour and creating a thick white fog. A fan placed at the top of the container directs the fog where it is needed, creating a rolling fog that lies low to the ground. As the submerged dry ice cools the water, the amount and duration of fog produced will be reduced, requiring "rest" periods to reheat the water.
Dry ice can also be used in conjunction with a fog machine to create a low-lying fog effect. Dry ice is placed inside an insulated container with an orifice at each end. Fog from a fog machine is pumped in one side of the container, and allowed to flow out the other end. Although this technique does allow an individual to create low-lying fog "on the cheap" (when compared to the cost of renting cylinders of liquid CO2 or watertight containers with integral heaters), the volume of low-lying fog produced is typically less, and is more susceptible to atmospheric disturbances.
Nitrogen
Liquid nitrogen (N2) is used to create low-lying fog effects in a manner similar to dry ice. A machine heats water to at or near the boiling point, creating steam and increasing the humidity in a closed container. When liquid nitrogen is pumped into the container, the moisture rapidly condenses, creating a thick white fog. A fan placed at the output of the container directs the fog where it is needed, creating a rolling fog that lies low to the ground. These types of machines are commonly referred to as "dry foggers" because the fog created by this method consists solely of water droplets, and as it dissipates there is little to no residue left on any surfaces. Dry Fogger is also a trademarked name for a particular brand of this style of fog machine. Liquid air can be used instead of nitrogen.
Historical usage
The Globe Theatre (1598–1613) reportedly used smoke effects during performances for atmosphere and special effects.
On 23 March 1934, Adelaide Hall opened at Harlem's Cotton Club in The Cotton Club Parade 24th Edition. In the show Hall introduced the song "Ill Wind", which Harold Arlen and Ted Koehler wrote especially for her. It was during Hall's rendition of "Ill Wind" that nitrogen smoke was used to cover the floor of the stage. It was the first time such an effect had ever been used on a stage and caused a sensation.
Smoke testing
When using smoke machines, a common test is to fill the venue to the full capacity with smoke to see if there are any smoke detectors still live, or if there are any leaks of smoke from the venue sufficient to set off detectors in other parts of the venue being tested. This practice is known as a smoke test.
Smoke machines are commonly used in the testing of larger HVAC systems to identify leaks in ducting, as well as to visualize air flow.
Awards
The techniques and technology for creating smoke and fog effects are continually evolving. The individuals who create and develop theatrical fog for use in the entertainment industry have received numerous recognition for their efforts.
Academy of Motion Picture Arts and Sciences
Technical achievement awards
On March 7, 1992, the Academy of Motion Picture Arts and Sciences presented a Technical Achievement Award to Jim Doyle for the design and development of the Dry Fogger, which uses liquid nitrogen to produce a safe, dense, low-hanging dry fog.
On February 28, 1998, the Academy of Motion Picture Arts and Sciences presented a Technical Achievement Award to James F. Foley (UCISCO); Charles Converse (UCISCO); F. Edward Gardner (UCISCO); Bob Stoker and Matt Sweeney for the development and realization of the Liquid Synthetic Air system.
On January 4, 2008, the Academy of Motion Picture Arts and Sciences presented a Technical Achievement Award to Jörg Pöhler and Rüdiger Kleinke of OTTEC Technology GmbH for the design and development of the battery-operated series of fog machines known as "Tiny Foggers."
Scientific and engineering award
On March 25, 1985, the Academy of Motion Picture Arts and Sciences presented a Scientific and Engineering Award to Günther Schaidt of Rosco Laboratories for the development of an improved, non-toxic fluid for creating fog and smoke for motion picture production.
Adverse health effects
Carbon dioxide
Unsafe concentrations of carbon dioxide can cause headaches, nausea, blurred vision, dizziness and shortness of breath. Higher concentrations will result in loss of consciousness and death due to suffocation. When using compressed carbon dioxide or dry ice, care should be taken to ensure there is sufficient ventilation available at all times, and that procedures are in place to rapidly evacuate CO2 from any enclosed space in an emergency.
Dry ice (−78.5 °C) presents a significant risk of frostbite if mishandled. Proper protective clothing, such as long sleeves and gloves, should always be worn when handling these products. Liquid carbon dioxide, (5 atmospheres; −56.6 °C), stored in compressed cylinders, also presents all the hazards attendant to materials under pressure and should be handled accordingly.
Liquid nitrogen
Nitrogen itself is relatively non-toxic, but in high concentrations it can displace oxygen, creating a suffocation hazard. Liquid nitrogen (−195.8 °C) presents a significant risk of frostbite or cold burn if mishandled. Proper protective clothing, such as long sleeves and gloves, should always be worn when handling these products. Liquid nitrogen is stored in compressed cylinders, and therefore presents all the hazards attendant to materials under pressure and should be handled accordingly.
Theatrical fog and artificial mists
A number of studies have been published on the potential health effects presented by exposure to theatrical fogs and artificial mists.
The first study was done by Consultech Engineering, Co. under contract to Actors' Equity. The findings of the Consultech study were confirmed by two additional studies: a Health Hazard Evaluation completed in 1994 by the National Institute for Occupational Safety and Health, and another one in 2000 by the Department of Community and Preventative Medicine at the Mount Sinai School of Medicine and ENVIRON; both prepared for Actors' Equity and the League of American Theatres and Producers, focused on the effects on actors and performers in Broadway musicals. The conclusion of all three studies was that there was irritation of mucous membranes such as the eyes and the respiratory tract associated with extended peak exposure to theatrical fog. Exposure guidelines were outlined in the 2000 study that, it was determined, should prevent actors from suffering adverse impact to their health or vocal abilities.
Another study focused on the use of theatrical fog in the commercial aviation industry for emergency training of staff in simulated fire conditions. This study that found eye and respiratory tract irritation can occur.
In May 2005, a study published in the American Journal of Industrial Medicine, conducted by the School of Environment and Health at the University of British Columbia, looked at adverse respiratory effects in crew members on a wide variety of entertainment venues ranging from live theatres, concerts, television and film productions to a video arcade. This study determined that cumulative exposure to mineral oil and glycol-based fogs was associated with acute and chronic adverse effects on respiratory health. This study found that short-term exposure to glycol fog was associated with coughing, dry throat, headaches, dizziness, drowsiness, and tiredness. This study also found long-term exposure to smoke and fog was associated with both short-term and long-term respiratory problems such as chest tightness and wheezing. Personnel working closest to the fog machines had reduced lung function results.
The Entertainment Services and Technology Association has compiled a standard for theatrical fogs or artificial mists compositions for use in entertainment venues that "are not likely to be harmful to otherwise healthy performers, technicians, or audience members of normal working age, which is 18 to 64 years of age, inclusive." This standard was based (though not exclusively) upon the findings of CIH literature studies commissioned by ESTA and applies only to those fog fluid compositions that consist of a mixture of water and glycol and glycerin (so-called "water based" fog fluid).
Short term exposure to glycol fog can be associated with headaches, dizziness, drowsiness and tiredness. Long term exposure to smoke and fog can be related to upper airway and voice symptoms. Extended (multi-year) exposure to smoke and fog has been associated with both short-term and long-term respiratory health problems. Efforts should be made to reduce exposure to theatrical smoke to as low a level as possible. The use of digital effects in post production on film and television sets can be considered a safer practice than using theatrical smoke and fog during filming, although this is not always practical.
See also
Fog machine
Haze machine
References
External links
Theatre Effects U.S. – Many Fog FAQs Found Within This Site
Rosco U.S.A. – How Fog Machines Work
Ontario Ministry of Labour – Fog and Smoke Safety Guideline for the Live Performance Industry
– Fog Film Special Effects
Fog
Scenic design
Smoke | Theatrical smoke and fog | [
"Physics",
"Engineering"
] | 3,200 | [
"Visibility",
"Fog",
"Physical quantities",
"Scenic design",
"Design"
] |
9,292,690 | https://en.wikipedia.org/wiki/Risk%20dominance | Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e. is less risky). This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.
The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called stag hunt. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the Prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible commitments.
Formal definition
The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only pure Nash equilibria. In addition there is a mixed Nash equilibrium where player 1 plays H with probability p = (d-c)/(a-b-c+d) and G with probability 1–p; player 2 plays H with probability q = (D-C)/(A-B-C+D) and G with probability 1–q.
Strategy pair (H, H) payoff dominates (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D or a > d.
Strategy pair (G, G) risk dominates (H, H) if the product of the deviation losses is higher for (G, G) (Harsanyi and Selten, 1988, Lemma 5.4.4). In other words, if the following inequality holds: (D − C)(d − c) ≥ (A − B)(a − b). If the inequality is strict then (G, G) strictly risk dominates (H, H) (that is, players have more incentive to deviate from (H, H)).
If the game is symmetric, so if A = a, B = b, etc., the inequality allows for a simple interpretation: We assume the players are unsure about which strategy the opponent will pick and assign probabilities for each strategy. If each player assigns probabilities ½ to H and G each, then (G, G) risk dominates (H, H) if the expected payoff from playing G exceeds the expected payoff from playing H: ½B + ½D ≥ ½A + ½C, or simply B + D ≥ A + C.
Another way to calculate the risk dominant equilibrium is to calculate the risk factor for all equilibria and to find the equilibrium with the smallest risk factor. To calculate the risk factor in our 2x2 game, consider the expected payoff to a player if they play H: pA + (1 − p)C (where p is the probability that the other player will play H), and compare it to the expected payoff if they play G: pB + (1 − p)D. The value of p which makes these two expected values equal, p = (D − C)/(A − B − C + D), is the risk factor for the equilibrium (H, H), with 1 − p the risk factor for playing (G, G). You can also calculate the risk factor for playing (G, G) by doing the same calculation, but setting p as the probability the other player will play G. An interpretation for p is that it is the smallest probability that the opponent must play that strategy such that the person's own payoff from copying the opponent's strategy is greater than if the other strategy was played.
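As a numeric check of these definitions, the short Python sketch below evaluates payoff dominance, risk dominance, and the risk factor for a symmetric stag-hunt-style game; the payoff numbers are illustrative assumptions, not values taken from Figure 1.

# Illustrative payoffs (assumed): A = (H,H), B = G against H, C = H against G, D = (G,G)
A, B, C, D = 5, 4, 0, 2
a, b, c, d = A, B, C, D        # symmetric game, so player 2 mirrors player 1

payoff_dominant_HH = A >= D and a >= d and (A > D or a > d)
risk_dominant_GG = (D - C) * (d - c) >= (A - B) * (a - b)

# Risk factor of (H, H): the smallest probability of the opponent playing H
# that makes H at least as good a reply as G.
risk_factor_HH = (D - C) / (A - B - C + D)

print(payoff_dominant_HH, risk_dominant_GG, risk_factor_HH)   # True True 0.666...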
Equilibrium selection
A number of evolutionary approaches have established that when played in a large population, players might fail to play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more likely to occur. The first model, based on replicator dynamics, predicts that a population is more likely to adopt the risk dominant equilibrium than the payoff dominant equilibrium. The second model, based on best response strategy revision and mutation, predicts that the risk dominant state is the only stochastically stable equilibrium. Both models assume that multiple two-player games are played in a population of N players. The players are matched randomly with opponents, with each player having equal likelihoods of drawing any of the N−1 other players. The players start with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response, players update their strategies to improve expected payoffs in the subsequent generations. The recognition of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation, and the probability of mutation vanishes, i.e. asymptotically reaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated.
Notes
A single Nash equilibrium is trivially payoff and risk dominant if it is the only NE in the game.
Similar distinctions between strict and weak exist for most definitions here, but are not denoted explicitly unless necessary.
Harsanyi and Selten (1988) propose that the payoff dominant equilibrium is the rational choice in the stag hunt game, however Harsanyi (1995) retracted this conclusion to take risk dominance as the relevant selection criterion.
References
Samuel Bowles: Microeconomics: Behavior, Institutions, and Evolution, Princeton University Press, pp. 45–46 (2004)
Drew Fudenberg and David K. Levine: The Theory of Learning in Games, MIT Press, p. 27 (1999)
John C. Harsanyi: "A New Theory of Equilibrium Selection for Games with Complete Information", Games and Economic Behavior 8, pp. 91–122 (1995)
John C. Harsanyi and Reinhard Selten: A General Theory of Equilibrium Selection in Games, MIT Press (1988)
Michihiro Kandori, George J. Mailath & Rafael Rob: "Learning, Mutation, and Long-run Equilibria in Games", Econometrica 61, pp. 29–56 (1993) Abstract
Roger B. Myerson: Game Theory, Analysis of Conflict, Harvard University Press, pp. 118–119 (1991)
Larry Samuelson: Evolutionary Games and Equilibrium Selection, MIT Press (1997)
H. Peyton Young: "The Evolution of Conventions", Econometrica, 61, pp. 57–84 (1993) Abstract
H. Peyton Young: Individual Strategy and Social Structure, Princeton University Press (1998)
Game theory equilibrium concepts
Evolutionary game theory | Risk dominance | [
"Mathematics"
] | 1,624 | [
"Game theory",
"Game theory equilibrium concepts",
"Evolutionary game theory"
] |
9,292,749 | https://en.wikipedia.org/wiki/Forward%E2%80%93backward%20algorithm | The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions , i.e. it computes, for all hidden state variables , the distribution . This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm.
The term forward–backward algorithm is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
Overview
In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all t ∈ {1, ..., T}, the probability of ending up in any particular state given the first t observations in the sequence, i.e. P(X_t | o_{1:t}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point t, i.e. P(o_{t+1:T} | X_t). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:
P(X_t | o_{1:T}) = P(X_t | o_{1:t}, o_{t+1:T}) ∝ P(o_{t+1:T} | X_t) P(X_t | o_{1:t})
The last step follows from an application of Bayes' rule and the conditional independence of o_{t+1:T} and o_{1:t} given X_t.
As outlined above, the algorithm involves three steps:
computing forward probabilities
computing backward probabilities
computing smoothed values.
The forward and backward steps may also be called "forward message pass" and "backward message pass" - these terms are due to the message-passing used in general belief propagation approaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results.
The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
Forward probabilities
The following description will use matrices of probability values rather than probability distributions, although in general the forward-backward algorithm can be applied to continuous as well as discrete probability models.
We transform the probability distributions related to a given hidden Markov model into matrix notation as follows.
The transition probabilities of a given random variable representing all possible states in the hidden Markov model will be represented by the matrix where the column index will represent the target state and the row index represents the start state. A transition from row-vector state to the incremental row-vector state is written as . The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then:
In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form:
provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system (), the probability of observing event j is then:
The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector () with an observation matrix () containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be:
This allows us to calculate the new unnormalized probabilities state vector through Bayes rule, weighting by the likelihood that each element of generated event 1 as:
We can now make this general procedure specific to our series of observations. Assuming an initial state vector , (which can be optimized as a parameter through repetitions of the forward-backward procedure), we begin with , then updating the state distribution and weighting by the likelihood of the first observation:
This process can be carried forward with additional observations using:
This value is the forward unnormalized probability vector. The i'th entry of this vector provides:
Typically, we will normalize the probability vector at each step so that its entries sum to 1. A scaling factor is thus introduced at each step such that:
where represents the scaled vector from the previous step and represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states:
This allows us to interpret the scaled probability vector as:
We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time.
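As a minimal sketch (assuming NumPy, and using the illustrative 70%/30% transition matrix and the 90%/10%–20%/80% event matrix described above), one scaled forward step can be written as:

import numpy as np

T = np.array([[0.7, 0.3],   # transition matrix: row = current state, column = next state
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],   # event matrix: row = state, column = event
              [0.2, 0.8]])

def forward_step(f_prev, event):
    # Propagate the row-vector state estimate one step, weight it by the
    # likelihood of the observed event, normalize, and return the scaling factor.
    O = np.diag(B[:, event])        # diagonal observation matrix for this event
    unnormalized = f_prev @ T @ O
    c = unnormalized.sum()          # scaling factor
    return unnormalized / c, c

f0 = np.array([0.5, 0.5])           # uniform prior over the two states
f1, c1 = forward_step(f0, event=0)  # observing event 1 gives approximately (0.818, 0.182)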
Backward probabilities
A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities:
That is, we now want to assume that we start in a particular state (), and we are now interested in the probability of observing all future events from this state. Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with:
Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using:
While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same constants used in the forward probability calculations. is not scaled, but subsequent operations use:
where represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by:
This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values:
To understand this, we note that provides the probability for observing the given events in a way that passes through state at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that . These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability.
The values thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other. They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. . The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm.
Example
This example takes as its basis the umbrella world in Russell & Norvig 2010 Chapter 15 pp. 567 in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then:
We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. The conditional probabilities for these occurring in each state are given by the probability matrix:
We then observe the following sequence of events: {umbrella, umbrella, no umbrella, umbrella, umbrella} which we will represent in our calculations as:
Note that differs from the others because of the "no umbrella" observation.
In computing the forward probabilities we begin with:
which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form:
instead of:
Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix. Performing these calculations and normalizing the results provides:
For the backward probabilities, we start with:
We are then able to compute (using the observations in reverse order and normalizing with different constants):
Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1 because we did not scale the backward probabilities with the 's found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time.
Notice that the value of is equal to and that is equal to . This follows naturally because both and begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, will only be equal to when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points.
The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella.
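As an illustration only, the whole umbrella-world calculation can be sketched in Python with NumPy (an assumption here; the Python example later in this article uses plain dictionaries instead). The matrices are those given above, and the smoothed values come from multiplying the scaled forward and backward vectors elementwise and renormalizing:

import numpy as np

T = np.array([[0.7, 0.3],   # weather transition matrix (rain, no rain)
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],   # emission probabilities: columns are umbrella, no umbrella
              [0.2, 0.8]])
events = [0, 0, 1, 0, 0]    # umbrella, umbrella, no umbrella, umbrella, umbrella

# Scaled forward pass
f = np.array([0.5, 0.5])    # uniform prior over the weather states
forwards, scales = [], []
for e in events:
    f = f @ T @ np.diag(B[:, e])
    c = f.sum()
    f = f / c
    forwards.append(f)
    scales.append(c)

# Backward pass, scaled with the same constants
b = np.array([1.0, 1.0])
backwards = [None] * len(events)
for t in range(len(events) - 1, -1, -1):
    backwards[t] = b
    b = (T @ np.diag(B[:, events[t]]) @ b) / scales[t]

# Smoothed values: proportional to the product of the forward and backward vectors
for t, (f_t, b_t) in enumerate(zip(forwards, backwards)):
    smoothed = f_t * b_t
    print(t + 1, smoothed / smoothed.sum())   # "rain" is most probable except on day 3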
Performance
The forward–backward algorithm runs with time complexity in space , where is the length of the time sequence and is the number of symbols in the state alphabet. The algorithm can also run in constant space with time complexity by recomputing values at each step. For comparison, a brute-force procedure would generate all possible state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity . Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high.
An enhancement to the general forward-backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking time and memory. Furthermore, it is possible to invert the process model to obtain an space, time algorithm, although the inverted process may not exist or be ill-conditioned.
In addition, algorithms have been developed to compute efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm.
Pseudocode
algorithm forward_backward is
    input: guessState
           int sequenceIndex
    output: result

    if sequenceIndex is past the end of the sequence then
        return 1
    if (guessState, sequenceIndex) has been seen before then
        return saved result

    result := 0
    for each neighboring state n:
        result := result + (transition probability from guessState to
                            n given observation element at sequenceIndex)
                            × Backward(n, sequenceIndex + 1)

    save result for (guessState, sequenceIndex)
    return result
Python example
Given an HMM (just like in the Viterbi algorithm) represented in the Python programming language:
states = ('Healthy', 'Fever')
end_state = 'E'
observations = ('normal', 'cold', 'dizzy')
start_probability = {'Healthy': 0.6, 'Fever': 0.4}
transition_probability = {
    'Healthy': {'Healthy': 0.69, 'Fever': 0.3, 'E': 0.01},
    'Fever': {'Healthy': 0.4, 'Fever': 0.59, 'E': 0.01},
}
emission_probability = {
    'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
    'Fever': {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6},
}
We can write the implementation of the forward-backward algorithm like this:
def fwd_bkw(observations, states, start_prob, trans_prob, emm_prob, end_st):
    """Forward–backward algorithm."""
    # Forward part of the algorithm
    fwd = []
    for i, observation_i in enumerate(observations):
        f_curr = {}
        for st in states:
            if i == 0:
                # base case for the forward part
                prev_f_sum = start_prob[st]
            else:
                prev_f_sum = sum(f_prev[k] * trans_prob[k][st] for k in states)
            f_curr[st] = emm_prob[st][observation_i] * prev_f_sum
        fwd.append(f_curr)
        f_prev = f_curr
    p_fwd = sum(f_curr[k] * trans_prob[k][end_st] for k in states)

    # Backward part of the algorithm
    bkw = []
    for i, observation_i_plus in enumerate(reversed(observations[1:] + (None,))):
        b_curr = {}
        for st in states:
            if i == 0:
                # base case for backward part
                b_curr[st] = trans_prob[st][end_st]
            else:
                b_curr[st] = sum(trans_prob[st][l] * emm_prob[l][observation_i_plus] * b_prev[l] for l in states)
        bkw.insert(0, b_curr)
        b_prev = b_curr
    p_bkw = sum(start_prob[l] * emm_prob[l][observations[0]] * b_curr[l] for l in states)

    # Merging the two parts
    posterior = []
    for i in range(len(observations)):
        posterior.append({st: fwd[i][st] * bkw[i][st] / p_fwd for st in states})

    assert p_fwd == p_bkw
    return fwd, bkw, posterior
The function fwd_bkw takes the following arguments:
observations is the sequence of observations, e.g. ('normal', 'cold', 'dizzy');
states is the set of hidden states;
start_prob is the start probability;
trans_prob are the transition probabilities;
emm_prob are the emission probabilities;
and end_st is the end state.
For simplicity of code, we assume that the observation sequence is non-empty and that trans_prob[i][j] and emm_prob[i][j] are defined for all states i, j.
In the running example, the forward-backward algorithm is used as follows:
def example():
return fwd_bkw(observations,
states,
start_probability,
transition_probability,
emission_probability,
end_state)
>>> for line in example():
... print(*line)
...
{'Healthy': 0.3, 'Fever': 0.04000000000000001} {'Healthy': 0.0892, 'Fever': 0.03408} {'Healthy': 0.007518, 'Fever': 0.028120319999999997}
{'Healthy': 0.0010418399999999998, 'Fever': 0.00109578} {'Healthy': 0.00249, 'Fever': 0.00394} {'Healthy': 0.01, 'Fever': 0.01}
{'Healthy': 0.8770110375573259, 'Fever': 0.1229889624426741} {'Healthy': 0.623228030950954, 'Fever': 0.3767719690490461} {'Healthy': 0.2109527048413057, 'Fever': 0.7890472951586943}
See also
Baum–Welch algorithm
Viterbi algorithm
BCJR algorithm
References
Lawrence R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77 (2), p. 257–286, February 1989. 10.1109/5.18626
External links
An interactive spreadsheet for teaching the forward–backward algorithm (spreadsheet and article with step-by-step walk-through)
Tutorial of hidden Markov models including the forward–backward algorithm
Collection of AI algorithms implemented in Java (including HMM and the forward–backward algorithm)
Dynamic programming
Error detection and correction
Machine learning algorithms
Markov models | Forward–backward algorithm | [
"Engineering"
] | 3,941 | [
"Error detection and correction",
"Reliability engineering"
] |
9,292,969 | https://en.wikipedia.org/wiki/2007%20Siberian%20orange%20snow | The Siberian orange snow of 2007 was an anomalous phenomenon that occurred in early February 2007. Beginning on 31 January 2007, an orange-tinted snow fell across an area of in Omsk Oblast, Siberian Federal District, Russia, approximately from Moscow, as well as into the neighboring Tomsk and Tyumen Oblasts. It was unclear what caused the orange snow. Speculation ranged from pollutants to a sandstorm in neighboring Kazakhstan.
Description
The orange snow was malodorous, oily to the touch, and reported to contain four times the normal level of iron. Though mostly orange, some of the snow was red or yellow. It affected an area with about 27,000 residents. It was originally speculated that it was caused by industrial pollution, a rocket launch or even a nuclear accident. It was later determined that the snow was non-toxic; however, people in the region were advised not to use the snow or allow animals to feed upon it. Colored snow is uncommon in Russia but not unheard of, as there have been many cases of black, blue, green and red snowfall.
Possible causes
This orange snow may have been caused by a heavy sandstorm in neighboring Kazakhstan. Tests on the snow revealed numerous sand and clay dust particles, which were blown into Russia in the upper stratosphere. The speculation that the coloration was caused by a rocket launch from Baikonur in Kazakhstan was later dismissed, as the last launch before the event took place on 18 January 2007.
Russia's environmental watchdog originally claimed that the colored snowfall was caused by industrial pollution, such as "waste from metallurgical plants." It stated that the snow contained four times the normal quantities of acids, nitrates, and iron. However, it would be nearly impossible to pinpoint a culprit if pollution were the cause, as there are various industries nearby, such as the city of Omsk, which is a center of the oil industry in Russia.
See also
References
Siberian Orange Snow, 2007
Siberian Orange Snow, 2007
Anomalous weather
History of Omsk Oblast
History of Siberia
Snow | 2007 Siberian orange snow | [
"Physics"
] | 421 | [
"Weather",
"Physical phenomena",
"Anomalous weather"
] |
9,293,352 | https://en.wikipedia.org/wiki/Carnivorous%20fungus | A carnivorous fungus or predaceous fungus is a fungus that derives some or most of its nutrients from trapping and eating microscopic or other minute animals. More than 200 species have been described, belonging to the phyla Ascomycota, Mucoromycotina, and Basidiomycota. They usually live in soil and many species trap or stun nematodes (nematophagous fungus), while others attack amoebae or collembola.
Fungi that grow on the epidermis, hair, skin, nails, scales or feathers of living or dead animals are considered to be dermatophytes rather than carnivores. Similarly, fungi in orifices and the digestive tract of animals are not carnivorous, and neither are internal pathogens. Insect pathogens that stun and colonize insects are likewise not normally labelled carnivorous if the fungal thallus is mainly inside the insect, as in Cordyceps, or if it clings to the insect, as in the Laboulbeniales. All of these are examples of parasitism or scavenging.
Two basic trapping mechanisms have been observed in carnivorous fungi that are predatory on nematodes:
constricting rings (active traps)
adhesive structures (passive traps)
Sequencing of ribosomal DNA has shown that these trap types occur in separate fungus lineages, an example of convergent evolution.
See also
Carnivorous plant
Predatory dinoflagellate
Protocarnivorous plant
References
Hauser, J.T. 1985. Carnivorous Plant Newsletter 14(1): 8-11. [reprinted from Carolina Tips, Carolina Biological Supply Company]
External links
Nematode Destroying Fungi
Fungus | Carnivorous fungus | [
"Biology"
] | 345 | [
"Eating behaviors",
"Carnivory"
] |
9,293,603 | https://en.wikipedia.org/wiki/Mouth | The mouth is the body orifice through which many animals ingest food and vocalize. The body cavity immediately behind the mouth opening, known as the oral cavity (or in Latin), is also the first part of the alimentary canal, which leads to the pharynx and the gullet. In tetrapod vertebrates, the mouth is bounded on the outside by the lips and cheeks — thus the oral cavity is also known as the buccal cavity (from Latin , meaning "cheek") — and contains the tongue on the inside. Except for some groups like birds and lissamphibians, vertebrates usually have teeth in their mouths, although some fish species have pharyngeal teeth instead of oral teeth.
Most bilaterian phyla, including arthropods, molluscs and chordates, have a two-opening gut tube with a mouth at one end and an anus at the other. Which end forms first in ontogeny is a criterion used to classify bilaterian animals into protostomes and deuterostomes.
Development
In the first multicellular animals, there was probably no mouth or gut and food particles were engulfed by the cells on the exterior surface by a process known as endocytosis. The particles became enclosed in vacuoles into which enzymes were secreted and digestion took place intracellularly. The digestive products were absorbed into the cytoplasm and diffused into other cells. This form of digestion is used nowadays by simple organisms such as Amoeba and Paramecium and also by sponges which, despite their large size, have no mouth or gut and capture their food by endocytosis.
However, most animals have a mouth and a gut, the lining of which is continuous with the epithelial cells on the surface of the body. A few animals which live parasitically originally had guts but have secondarily lost these structures. The original gut of diploblastic animals probably consisted of a mouth and a one-way gut. Some modern invertebrates still have such a system: food being ingested through the mouth, partially broken down by enzymes secreted in the gut, and the resulting particles engulfed by the other cells in the gut lining. Indigestible waste is ejected through the mouth.
In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the formation of the gut. In deuterostomes, the blastopore becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. In the protostomes, it used to be thought that the blastopore formed the mouth (proto– meaning "first") while the anus formed later as an opening made by the other end of the gut. More recent research, however, shows that in protostomes the edges of the slit-like blastopore close up in the middle, leaving openings at both ends that become the mouth and anus.
Anatomy
Invertebrates
Apart from sponges and placozoans, almost all animals have an internal gut cavity, which is lined with gastrodermal cells. In less advanced invertebrates such as the sea anemone, the mouth also acts as an anus. Circular muscles around the mouth are able to relax or contract in order to open or close it. A fringe of tentacles thrusts food into the cavity and it can gape widely enough to accommodate large prey items. Food passes first into a pharynx and digestion occurs extracellularly in the gastrovascular cavity. Annelids have simple tube-like guts, and the possession of an anus allows them to separate the digestion of their foodstuffs from the absorption of the nutrients.
Many molluscs have a radula, which is used to scrape microscopic particles off surfaces. In invertebrates with hard exoskeletons, various mouthparts may be involved in feeding behaviour. Insects have a range of mouthparts suited to their mode of feeding. These include mandibles, maxillae and labium and can be modified into suitable appendages for chewing, cutting, piercing, sponging and sucking. Decapods have six pairs of mouth appendages, one pair of mandibles, two pairs of maxillae and three of maxillipeds. Sea urchins have a set of five sharp calcareous plates, which are used as jaws and are known as Aristotle's lantern.
Vertebrates
In vertebrates, the first part of the digestive system is the buccal cavity, commonly known as the mouth. The buccal cavity of a fish is separated from the opercular cavity by the gills. Water flows in through the mouth, passes over the gills and exits via the operculum or gill slits. Nearly all fish have jaws and may seize food with them but most feed by opening their jaws, expanding their pharynx and sucking in food items. The food may be held or chewed by teeth located in the jaws, on the roof of the mouth, on the pharynx or on the gill arches.
Nearly all amphibians are carnivorous as adults. Many catch their prey by flicking out an elongated tongue with a sticky tip and drawing it back into the mouth, where they hold the prey with their jaws. They then swallow their food whole without much chewing. They typically have many small hinged pedicellate teeth, the bases of which are attached to the jaws, while the crowns break off at intervals and are replaced. Most amphibians have one or two rows of teeth in both jaws but some frogs lack teeth in the lower jaw. In many amphibians, there are also vomerine teeth attached to the bone in the roof of the mouth.
The mouths of reptiles are largely similar to those of mammals. The crocodilians are the only reptiles to have teeth anchored in sockets in their jaws. They are able to replace each of their approximately 80 teeth up to 50 times during their lives. Most reptiles are either carnivorous or insectivorous, but turtles are often herbivorous. Lacking teeth that are suitable for efficiently chewing of their food, turtles often have gastroliths in their stomach to further grind the plant material. Snakes have a very flexible lower jaw, the two halves of which are not rigidly attached, and numerous other joints in their skull. These modifications allow them to open their mouths wide enough to swallow their prey whole, even if it is wider than they are.
Birds do not have teeth, relying instead on other means of gripping and macerating their food. Their beaks have a range of sizes and shapes according to their diet and are composed of elongated mandibles. The upper mandible may have a nasofrontal hinge allowing the beak to open wider than would otherwise be possible. The exterior surface of beaks is composed of a thin, horny sheath of keratin. Nectar feeders such as hummingbirds have specially adapted brushy tongues for sucking up nectar from flowers.
In mammals, the buccal cavity is typically roofed by the hard and soft palates, floored by the tongue and surrounded by the cheeks, salivary glands, and upper and lower teeth. The upper teeth are embedded in the upper jaw and the lower teeth in the lower jaw, which articulates with the temporal bones of the skull. The lips are soft and fleshy folds which shape the entrance into the mouth. The buccal cavity empties through the pharynx into the oesophagus.
Other functions of the mouth
Crocodilians living in the tropics can gape with their mouths to provide cooling by evaporation from the mouth lining. Some mammals rely on panting for thermoregulation as it increases evaporation of water across the moist surfaces of the lungs, the tongue and mouth. Birds also avoid overheating by gular fluttering, flapping the wings near the gular (throat) skin, similar to panting in mammals.
Various animals use their mouths in threat displays. They may gape widely, exhibit their teeth prominently, or flash the startling colours of the mouth lining. This display allows each potential combatant an opportunity to assess the weapons of their opponent and lessens the likelihood of actual combat being necessary.
A number of species of bird use a gaping, open beak in their fear and threat displays. Some augment the display by hissing or breathing heavily, while others clap their beaks.
Mouths are also used as part of the mechanism for producing sounds for communication. To produce sounds, air is forced from the lungs over vocal cords in the larynx. In humans, the pharynx, soft palate, hard palate, alveolar ridge, tongue, teeth and lips are termed articulators and play their part in the production of speech. Varying the position of the tongue in relation to the other articulators or moving the lips restricts the airflow from the lungs in different ways and changes the mouth's resonating properties, producing a range of different sounds. In frogs, the sounds can be amplified using sacs in the throat region. The vocal sacs can be inflated and deflated and act as resonators to transfer the sound to the outside world. A bird's song is produced by the flow of air over a vocal organ at the base of the trachea, the syrinx. For each burst of song, the bird opens its beak and closes it again afterwards. The beak may move slightly and may contribute to the resonance but the song originates elsewhere.
See also
Oral manifestations of systemic disease
References
External links
Human head and neck
Animal anatomy
Digestive system
Facial features | Mouth | [
"Biology"
] | 2,011 | [
"Digestive system",
"Organ systems"
] |
9,295,203 | https://en.wikipedia.org/wiki/Mayo%E2%80%93Lewis%20equation | The Mayo–Lewis equation or copolymer equation in polymer chemistry describes the distribution of monomers in a copolymer. It was proposed by Frank R. Mayo and Frederick M. Lewis.
The equation considers a monomer mix of two components and and the four different reactions that can take place at the reactive chain end terminating in either monomer ( and ) with their reaction rate constants :
The reactivity ratio for each propagating chain end is defined as the ratio of the rate constant for addition of a monomer of the species already at the chain end to the rate constant for addition of the other monomer.
The copolymer equation is then:
with the concentrations of the components in square brackets. The equation gives the relative instantaneous rates of incorporation of the two monomers.
Equation derivation
Monomer 1 is consumed with reaction rate:
with the concentration of all the active chains terminating in monomer 1, summed over chain lengths. is defined similarly for monomer 2.
Likewise the rate of disappearance for monomer 2 is:
Division of both equations by followed by division of the first equation by the second yields:
The ratio of active center concentrations can be found using the steady state approximation, meaning that the concentration of each type of active center remains constant.
The rate of formation of active centers of monomer 1 () is equal to the rate of their destruction () so that
or
Substituting into the ratio of monomer consumption rates yields the Mayo–Lewis equation after rearrangement:
Mole fraction form
It is often useful to alter the copolymer equation by expressing concentrations in terms of mole fractions. Mole fractions of monomers and in the feed are defined as and where
Similarly, represents the mole fraction of each monomer in the copolymer:
These equations can be combined with the Mayo–Lewis equation to give
This equation gives the composition of copolymer formed at each instant. However the feed and copolymer compositions can change as polymerization proceeds.
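In its commonly quoted mole-fraction form the relation reads F1 = (r1 f1^2 + f1 f2)/(r1 f1^2 + 2 f1 f2 + r2 f2^2). A short Python sketch of this relation (the reactivity ratios below are illustrative, not measured values):

def instantaneous_composition(f1, r1, r2):
    # Mole fraction F1 of monomer 1 incorporated at an instant when the feed
    # contains mole fraction f1 of monomer 1.
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

# Example: a nearly alternating system (both reactivity ratios close to zero);
# even a feed rich in monomer 1 gives a copolymer composition close to 0.5.
print(instantaneous_composition(f1=0.8, r1=0.02, r2=0.01))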
Limiting cases
Reactivity ratios indicate preference for propagation. Large indicates a tendency for to add , while small corresponds to a tendency for to add . Values of describe the tendency of to add or . From the definition of reactivity ratios, several special cases can be derived:
If both reactivity ratios are very high, the two monomers only react with themselves and not with each other. This leads to a mixture of two homopolymers.
. If both ratios are larger than 1, homopolymerization of each monomer is favored. However, in the event of crosspolymerization adding the other monomer, the chain-end will continue to add the new monomer and form a block copolymer.
. If both ratios are near 1, a given monomer will add the two monomers with comparable speeds and a statistical or random copolymer is formed.
If both values are near 0, the monomers are unable to homopolymerize. Each can add only the other, resulting in an alternating polymer. For example, the copolymerization of maleic anhydride and styrene has reactivity ratios = 0.01 for maleic anhydride and = 0.02 for styrene. Maleic anhydride in fact does not homopolymerize in free radical polymerization, but will form an almost exclusively alternating copolymer with styrene.
In the initial stage of the copolymerization, monomer 1 is incorporated faster and the copolymer is rich in monomer 1. When this monomer gets depleted, more monomer 2 segments are added. This is called composition drift.
When both , the system has an azeotrope, where feed and copolymer composition are the same.
Calculation of reactivity ratios
Calculation of reactivity ratios generally involves carrying out several polymerizations at varying monomer ratios. The copolymer composition can be analysed with methods such as Proton nuclear magnetic resonance, Carbon-13 nuclear magnetic resonance, or Fourier transform infrared spectroscopy. The polymerizations are also carried out at low conversions, so monomer concentrations can be assumed to be constant. With all the other parameters in the copolymer equation known, and can be found.
Curve Fitting
One of the simplest methods for finding reactivity ratios is plotting the copolymer equation and using nonlinear least squares analysis to find the , pair that gives the best fit curve. This is preferred as methods such as Kelen-Tüdős or Fineman-Ross (see below) that involve linearization of the Mayo–Lewis equation will introduce bias to the results.
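A sketch of this fitting procedure in Python, assuming SciPy's curve_fit and synthetic composition data (real values would come from low-conversion copolymerizations):

import numpy as np
from scipy.optimize import curve_fit

def mayo_lewis(f1, r1, r2):
    # Instantaneous copolymer composition F1 as a function of feed composition f1.
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

f1_feed = np.array([0.1, 0.3, 0.5, 0.7, 0.9])        # feed mole fractions of monomer 1
F1_copo = np.array([0.21, 0.42, 0.55, 0.67, 0.84])   # copolymer compositions (synthetic data)

(r1, r2), _ = curve_fit(mayo_lewis, f1_feed, F1_copo, p0=(1.0, 1.0))
print(r1, r2)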
Mayo-Lewis Method
The Mayo-Lewis method uses a form of the copolymer equation relating to :
For each different monomer composition, a line is generated using arbitrary values. The intersection of these lines is the , for the system. More frequently, the lines do not intersect in a single point and the area in which most lines intersect can be given as a range of , and values.
Fineman-Ross Method
Fineman and Ross rearranged the copolymer equation into a linear form:
where and
Thus, a plot of versus yields a straight line with slope and intercept
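A sketch of the Fineman-Ross procedure, assuming the usual definitions x = [M1]/[M2] and y = d[M1]/d[M2] with G = x(y − 1)/y and H = x^2/y, so that G plotted against H gives slope r1 and intercept −r2 (the data below are synthetic):

import numpy as np

x = np.array([0.25, 0.5, 1.0, 2.0, 4.0])      # feed ratios [M1]/[M2]
y = np.array([0.40, 0.70, 1.20, 2.10, 3.90])  # copolymer ratios d[M1]/d[M2] (made up)

G = x * (y - 1) / y
H = x**2 / y
slope, intercept = np.polyfit(H, G, 1)        # straight-line fit
r1, r2 = slope, -intercept
print(r1, r2)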
Kelen-Tüdős method
The Fineman-Ross method can be biased towards points at low or high monomer concentration, so Kelen and Tüdős introduced an arbitrary constant,
where and are the highest and lowest values of from the Fineman-Ross method. The data can be plotted in a linear form
where and . Plotting against yields a straight line that gives when and when . This distributes the data more symmetrically and can yield better results.
Q-e scheme
A semi-empirical method for the prediction of reactivity ratios is called the Q-e scheme which was proposed by Alfrey and Price in 1947. This involves using two parameters for each monomer, and . The reaction of
radical with monomer is written as
while the reaction of radical with monomer is written as
Where P is a proportionality constant, Q is the measure of reactivity of monomer via resonance stabilization, and e is the measure of polarity of monomer (molecule or radical) via the effect of functional groups on vinyl groups. Using these definitions, and can be found by the ratio of the terms. An advantage of this system is that reactivity ratios can be found using tabulated Q-e values of monomers regardless of what the monomer pair is in the system.
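A sketch of this calculation, assuming the usual Alfrey-Price relations r1 = (Q1/Q2)exp(−e1(e1 − e2)) and r2 = (Q2/Q1)exp(−e2(e2 − e1)); the Q and e values used are approximate literature figures, quoted only for illustration:

import math

def reactivity_ratios(Q1, e1, Q2, e2):
    # Reactivity ratios from the Alfrey-Price Q-e relations.
    r1 = (Q1 / Q2) * math.exp(-e1 * (e1 - e2))
    r2 = (Q2 / Q1) * math.exp(-e2 * (e2 - e1))
    return r1, r2

# Styrene (reference monomer, Q = 1.0, e = -0.8) with methyl methacrylate (roughly Q = 0.74, e = 0.40)
print(reactivity_ratios(1.0, -0.8, 0.74, 0.40))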
External links
copolymers @zeus.plmsc.psu.edu Link
References
Polymer chemistry
Equations | Mayo–Lewis equation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,338 | [
"Equations",
"Mathematical objects",
"Materials science",
"Polymer chemistry"
] |
9,295,370 | https://en.wikipedia.org/wiki/Polyot%20%28rocket%29 | The Polyot (, flight) (Also known as Sputnik, GRAU index 11A59) was an interim orbital carrier rocket, built to test ASAT spacecraft. It was required as a stopgap after the cancellation of the UR-200 programme, but before the Tsyklon could enter service. Only two were ever launched, the first on 1 November 1963, and the last on 12 April 1964. Both of these flights were successful.
The rocket consisted of a core stage, and four boosters, which were taken from a Voskhod 11A57 rocket. It was capable of delivering a 1,400 kg payload into a 300 km by 59° Low Earth orbit.
It is a member of the R-7 family.
See also
Comparable rockets
Tsyklon
UR-200
Related developments
R-7 Semyorka
Vostok rocket
Voskhod rocket
Molniya rocket
Soyuz rocket
Associated spacecraft
ASAT
External links
Encyclopedia Astronautica entry
Space launch vehicles of the Soviet Union
R-7 (rocket family)
Vehicles introduced in 1963 | Polyot (rocket) | [
"Astronomy"
] | 216 | [
"Rocketry stubs",
"Astronomy stubs"
] |
9,296,238 | https://en.wikipedia.org/wiki/Chebyshev%E2%80%93Markov%E2%80%93Stieltjes%20inequalities | In mathematical analysis, the Chebyshev–Markov–Stieltjes inequalities are inequalities related to the problem of moments that were formulated in the 1880s by Pafnuty Chebyshev and proved independently by Andrey Markov and (somewhat later) by Thomas Jan Stieltjes. Informally, they provide sharp bounds on a measure from above and from below in terms of its first moments.
Formulation
Given m0,...,m2m-1 ∈ R, consider the collection C of measures μ on R such that
for k = 0,1,...,2m − 1 (and in particular the integral is defined and finite).
Let P0,P1, ...,Pm be the first m + 1 orthogonal polynomials with respect to μ ∈ C, and let ξ1,...ξm be the zeros of Pm. It is not hard to see that the polynomials P0,P1, ...,Pm-1 and the numbers ξ1,...ξm are the same for every μ ∈ C, and therefore are determined uniquely by m0,...,m2m-1.
Denote
.
Theorem For j = 1,2,...,m, and any μ ∈ C,
References
Theorems in analysis
Inequalities | Chebyshev–Markov–Stieltjes inequalities | [
"Mathematics"
] | 281 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
9,296,327 | https://en.wikipedia.org/wiki/Digital%20era%20governance | The first idea of a digital administrative law was born in Italy in 1978 by Giovanni Duni and was developed in 1991 with the name teleadministration.
In the public administration debate about new public management (NPM), Patrick Dunleavy, Helen Margetts and their co-authors claim that the concept of digital era governance (DEG) has been replacing NPM since around 2000 to 2005. DEG has three key elements: reintegration (bringing issues back into government control, like US airport security after 9/11); needs-based holism (reorganizing government around distinct client groups); and digitization (fully exploiting the potential of digital storage and Internet communications to transform governance). Digital era governance implies that public sector organizations are facing new challenges and rapidly changing information technologies and information systems.
Since the popularization of the theory, it has been applied and enriched through the empirical works, such as the case study done on Brunei's Information Department. The case study demonstrated that digital dividends that can be secured through the effective application of new technology in the digital governance process.
Management approaches for digital era governance
To create a better government by means of ICT, public sector organizations cannot rely on their traditional methods. Firstly, traditional public services often are fragmented, duplicative, and inconsistent across government. Secondly, silo-like organizational management then leads to more individual government offices that are less effective regarding the creation of public values. A reorientation towards more innovative approaches is necessary. The mere implementation of technological instruments however does not necessarily require a change of management. For an improved government it is necessary to go from traditional management approaches towards innovative approaches.
Collaborative management
One approach is a highly collaborative way of managing future policy implementations such as the development of proactive public services. Such proactive services require little to no action by the users and eliminate the "burden and confusion for citizens and businesses, who can now obtain services without dealing with bureaucracy". This new course of action on the one hand entails a significant transformation of government practices while on the other hand proactive or new services (e.g., automated assistants) might rely on third parties who are leveraging government information. This shift from government-provided towards third party services represents a distinct approach and forces public management to transfer some of their control by forming policy knowledge and resource networks.
Problem oriented governance
Another approach refers to a change of mindset. This means that management should consider the transformation from service-oriented governance towards problem-oriented governance which is an "approach to policy design and implementation that emphasizes the need for organizations to adapt their form and functioning to the nature of the public problems they seek to address". This will create a more holistic and efficient way of tackling citizens’ needs and future technological advances. Since the digital era management challenges are also about harmonizing "delivery-first, user-centric, agile work models while also satisfying, or alternatively, challenging, onerous hierarchical accountability requirements", public sector organizations require fundamental change of the inflexible culture of bureaucratic organizations, e.g., by establishing cross-functional problem-oriented teams.
See also
Cyberocracy
E-governance
E-government
Government by algorithm
References
Political science
Public administration
Public policy
Digital technology | Digital era governance | [
"Technology"
] | 666 | [
"Information and communications technology",
"Digital technology"
] |
9,296,998 | https://en.wikipedia.org/wiki/Andrussow%20process | The Andrussow process is the dominant industrial process for the production of hydrogen cyanide. It involves the reaction of methane, ammonia, and oxygen. The process is catalyzed by a platinum-rhodium alloy.
2 CH4 + 2 NH3 + 3 O2 → 2 HCN + 6 H2O
Hydrogen cyanide is highly valued for the production of acrylonitrile and adiponitrile, as well as alkali metal cyanides such as potassium cyanide.
Process details
This reaction is highly exothermic, with an enthalpy change of −481.06 kJ. The heat released by the main reaction also drives several side reactions:
CH4 + H2O → CO + 3 H2
2 CH4 + 3 O2 → 2 CO + 4 H2O
4 NH3 + 3 O2 → 2 N2 + 6 H2O
These side reactions can be minimized by keeping the contact time with the catalyst short, of the order of 0.0003 s.
History
The process is based on a reaction that was discovered by Leonid Andrussow in 1927. In the following years he developed the process that is named after him. HCN is also produced in the BMA process.
References
Organic redox reactions
Industrial processes
Catalysis
Name reactions | Andrussow process | [
"Chemistry"
] | 279 | [
"Catalysis",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Chemical kinetics"
] |
9,297,009 | https://en.wikipedia.org/wiki/Phillip%20Burton%20Federal%20Building | The Phillip Burton Federal Building & United States Courthouse is a massive 21 floor, federal office building located at 450 Golden Gate Avenue near San Francisco's Civic Center and the San Francisco City Hall. The building occupies an entire city block, bounded by Golden Gate Avenue at the south, Turk Street at the north, Polk Street at the west, and Larkin Street at the east.
Designed by the local architectural firm of John Carl Warnecke and Associates in the International Style, construction was completed in 1964.
It serves as one of four courthouses for the United States District Court for the Northern District of California and was one of the earliest office towers in San Francisco. It is named for former U.S. Representative Phillip Burton.
Occupants
Cafe 450 – 2nd Fl.
Federal Bureau of Investigation San Francisco Field Office – 13th Fl.
Northern California Regional Intelligence Center - NCRIC & Northern California High Intensity Drug Trafficking Area - NC HIDTA San Francisco – 14th Fl.
Federal Public Defender – 19th Fl.
Internal Revenue Service Help Center – 1st Fl.
U.S. Army Corps of Engineers
U.S. Attorney's Office – 11th and 9th Fl.
U.S. Department of Justice Antitrust Division – 10th Fl.
U.S. District Court for the Northern District of California
U.S. Marshals Service – 20th Fl.
San Francisco Passport Agency – 3rd Fl.
U.S. Pretrial Services – 18th Fl.
U.S. Probation – 17th Fl.
See also
List of tallest buildings in San Francisco
References
Civic Center, San Francisco
Skyscraper office buildings in San Francisco
Government buildings completed in 1959
Leadership in Energy and Environmental Design certified buildings | Phillip Burton Federal Building | [
"Engineering"
] | 335 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
9,297,256 | https://en.wikipedia.org/wiki/Stephen%20Suomi | Stephen J. Suomi is chief of the Laboratory of Comparative Ethology at the National Institute of Child Health and Human Development (NICHD) in Bethesda, Maryland. He is also a research professor at the University of Virginia, the University of Maryland, and Johns Hopkins University. He is involved with the Experience-based Brain & Biological Development Program, launched in 2003 by the Canadian Institute for Advanced Research.
Suomi was elected as a Fellow of the American Association for the Advancement of Science for his contributions to the understanding of how socialization affects the psychological development of non-human primates. He worked in the early 1970s as a research assistant to psychologist Harry Harlow, showing that it was possible to rehabilitate rhesus monkeys that had been reared in social isolation for the first six months of life by temporarily housing them with socially normal monkeys. At the University of Wisconsin-Madison Suomi worked with Harry Harlow to develop the pit of despair, a series of controversial and widely condemned experiments on baby monkeys that have been credited by some researchers as starting the animal liberation movement in the United States. Suomi has made no mention of the morality of his work.
Education and career
Suomi received a B.A. in psychology from Stanford University in 1968, and a Ph.D. in the same subject from the University of Wisconsin–Madison in 1971. He became a full professor with the university's psychology department in 1984, and began to work for the NICHD in 1983.
Work
Suomi describes his current research interests as focusing on the role of genetic and environmental factors in shaping individual psychological development in non-human primates; the effect of change on psychological development; and whether findings on monkeys in captivity can translate to monkeys living in the wild, and between human beings of different cultures.
In 2014, following a campaign by PETA, Suomi was criticized by members of the U.S. Congress for maternal deprivation experiments on monkeys. Both the American Psychological Association and the American Society of Primatologists defended Suomi's research as scientifically useful and ethically sound. However, in 2015, the National Institutes of Health (NIH) announced it would end monkey experiments for financial reasons, stressing that PETA's campaign "was not a factor in this decision". The following year, it announced it would review its policies on all primate research.
See also
Animal testing
Harry Harlow
Pit of despair
Notes
Further reading
Blum, Deborah. The Monkey Wars. Oxford University Press, 1994.
External links
Profile for Stephen Suomi at the Canadian Institute for Advanced Research (Wayback Machine copy)
"Good mothers stop monkeys going bad," by Andy Coghlan, New Scientist.
Ethologists
Animal testing in the United States
Living people
American people of Finnish descent
Fellows of the American Association for the Advancement of Science
Stanford University alumni
University of Wisconsin–Madison College of Letters and Science alumni
National Institutes of Health people
Year of birth missing (living people) | Stephen Suomi | [
"Biology"
] | 597 | [
"Ethology",
"Behavior",
"Ethologists"
] |
9,298,279 | https://en.wikipedia.org/wiki/Sono%20arsenic%20filter | The Sono arsenic filter was invented in 2006 by Abul Hussam, who is a chemistry professor at George Mason University (GMU) in Fairfax, Virginia. It was developed to deal with the problem of arsenic contamination of groundwater. The filter is now in use in Hussam's native Bangladesh.
Development
Farmers had been drinking fresh groundwater from wells, whereas previously they had had to use ponds and mudholes which were contaminated with bacteria and viruses. However, these wells were also contaminated with naturally occurring high concentrations of poisonous arsenic, causing skin ailments and cancers. Awareness of the problem developed through the 1990s.
Allan Smith, an epidemiologist at the University of California at Berkeley, observed that the arsenic problem affects millions of people worldwide.
Hussam developed his filter after years of testing hundreds of prototypes. The final version contains shards of porous iron, which bind chemically with arsenic. It also includes charcoal, sand and bits of brick. It filters nearly all arsenic from well water.
Awards
Hussam was awarded the 2007 Grainger Challenge Prize for Sustainability by the National Academy of Engineering. Hussam plans to use 70% of the $1 million engineering prize to distribute filters to needy communities.
See also
Backwashing
Carbon filtering
Distillation
Filtration
Reverse osmosis
Sand separator
Settling basin
Water purification
References
External links
DWC-Water: Arsenic filtration - description and test results.
Invention description at GMU website
Manob Sakti Unnayan Kendro (MSUK) - development and distribution.
Bangladeshi inventions
Water filters
Arsenic
Water pollution | Sono arsenic filter | [
"Chemistry",
"Environmental_science"
] | 323 | [
"Water treatment",
"Water filters",
"Filters",
"Water pollution"
] |
9,299,409 | https://en.wikipedia.org/wiki/Nucleic%20acid%20thermodynamics | Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature (Tm) is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. Tm depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.
Concepts
Hybridization
Hybridization is the process of establishing a non-covalent, sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex. Oligonucleotides, DNA, or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility by quantifying the temperature at which two strands anneal can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation, also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for polymerase chain reaction. Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.
Denaturation
DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds and separates into single-stranded strands through the breaking of hydrophobic stacking attractions between the bases. See Hydrophobic effect. Both terms are used to refer to the process as it occurs when a mixture is heated, although "denaturation" can also refer to the separation of DNA strands induced by chemicals like formamide or urea.
The process of DNA denaturation can be used to analyze some aspects of DNA. Because cytosine / guanine base-pairing is generally stronger than adenine / thymine base-pairing, the amount of cytosine and guanine in a genome is called its GC-content and can be estimated by measuring the temperature at which the genomic DNA melts. Higher temperatures are associated with high GC content.
DNA denaturation can also be used to detect sequence differences between two different DNA sequences. DNA is heated and denatured into single-stranded state, and the mixture is cooled to allow strands to rehybridize. Hybrid molecules are formed between similar sequences and any differences between those sequences will result in a disruption of the base-pairing. On a genomic scale, the method has been used by researchers to estimate the genetic distance between two species, a process known as DNA-DNA hybridization. In the context of a single isolated region of DNA, denaturing gradient gels and temperature gradient gels can be used to detect the presence of small mismatches between two sequences, a process known as temperature gradient gel electrophoresis.
Methods of DNA analysis based on melting temperature have the disadvantage of being proxies for studying the underlying sequence; DNA sequencing is generally considered a more accurate method.
The process of DNA melting is also used in molecular biology techniques, notably in the polymerase chain reaction. Although the temperature of DNA melting is not diagnostic in the technique, methods for estimating Tm are important for determining the appropriate temperatures to use in a protocol. DNA melting temperatures can also be used as a proxy for equalizing the hybridization strengths of a set of molecules, e.g. the oligonucleotide probes of DNA microarrays.
Annealing
Annealing, in genetics, means for complementary sequences of single-stranded DNA or RNA to pair by hydrogen bonds to form a double-stranded polynucleotide. Before annealing can occur, one of the strands may need to be phosphorylated by an enzyme such as kinase to allow proper hydrogen bonding to occur. The term annealing is often used to describe the binding of a DNA probe, or the binding of a primer to a DNA strand during a polymerase chain reaction. The term is also often used to describe the reformation (renaturation) of reverse-complementary strands that were separated by heat (thermally denatured). Proteins such as RAD52 can help DNA anneal. DNA strand annealing is a key step in pathways of homologous recombination. In particular, during meiosis, synthesis-dependent strand annealing is a major pathway of homologous recombination.
Stacking
Stacking is the stabilizing interaction between the flat surfaces of adjacent bases. Stacking can happen with any face of the base, that is 5'-5', 3'-3', and vice versa.
Stacking in "free" nucleic acid molecules is mainly contributed by intermolecular force, specifically electrostatic attraction among aromatic rings, a process also known as pi stacking. For biological systems with water as a solvent, hydrophobic effect contributes and helps in formation of a helix. Stacking is the main stabilizing factor in the DNA double helix.
Contribution of stacking to the free energy of the molecule can be experimentally estimated by observing the bent-stacked equilibrium in nicked DNA. Such stabilization is dependent on the sequence. The extent of the stabilization varies with salt concentrations and temperature.
Thermodynamics of the two-state model
Several formulas are used to calculate Tm values. Some formulas are more accurate in predicting melting temperatures of DNA duplexes. For DNA oligonucleotides, i.e. short sequences of DNA, the thermodynamics of hybridization can be accurately described as a two-state process. In this approximation one neglects the possibility of intermediate partial binding states in the formation of a double strand state from two single stranded oligonucleotides. Under this assumption one can elegantly describe the thermodynamic parameters for forming double-stranded nucleic acid AB from single-stranded nucleic acids A and B.
AB ↔ A + B
The equilibrium constant for this reaction is . According to the van 't Hoff equation, the relation between free energy, ΔG, and K is ΔG° = −RT ln K, where R is the ideal gas constant and T is the absolute temperature of the reaction. This gives, for the nucleic acid system,
.
The melting temperature, Tm, occurs when half of the double-stranded nucleic acid has dissociated. If no additional nucleic acids are present, then [A], [B], and [AB] will be equal, and equal to half the initial concentration of double-stranded nucleic acid, [AB]initial. This gives an expression for the melting point of a nucleic acid duplex of
.
Because ΔG° = ΔH° − TΔS°, Tm is also given by
.
The terms ΔH° and ΔS° are usually given for the association and not the dissociation reaction (see the nearest-neighbor method for example). This formula then turns into
Tm = ΔH° / (ΔS° + R ln([A]total − [B]total/2)), where [B]total ≤ [A]total.
As mentioned, this equation is based on the assumption that only two states are involved in melting: the double stranded state and the random-coil state. However, nucleic acids may melt via several intermediate states. To account for such complicated behavior, the methods of statistical mechanics must be used, which is especially relevant for long sequences.
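As an illustration of the two-state relation above, the following Python sketch computes Tm from association ΔH° and ΔS° values and the total strand concentrations; it is a minimal example written for this text, and the numerical parameter values in it are placeholders rather than measured data.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def melting_temperature(dH, dS, a_total, b_total):
    """Two-state melting temperature in kelvin for a non-self-complementary
    duplex, using association parameters dH (kcal/mol) and dS (kcal/(mol*K)).
    Assumes [B]total <= [A]total, as stated in the text."""
    if b_total > a_total:
        raise ValueError("requires [B]total <= [A]total")
    return dH / (dS + R * math.log(a_total - b_total / 2.0))

# Placeholder association parameters and 1 micromolar of each strand:
dH = -60.0      # kcal/mol
dS = -0.170     # kcal/(mol*K)
conc = 1e-6     # mol/L
tm = melting_temperature(dH, dS, conc, conc)
print(f"Tm = {tm - 273.15:.1f} deg C")
```

With these placeholder numbers the sketch returns a Tm of roughly 29 °C; real predictions require the measured ΔH° and ΔS° values discussed in the following sections.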
Estimating thermodynamic properties from nucleic acid sequence
The previous paragraph shows how melting temperature and thermodynamic parameters (ΔG°, or ΔH° and ΔS°) are related to each other. From the observation of melting temperatures one can experimentally determine the thermodynamic parameters. Conversely, and importantly for applications, when the thermodynamic parameters of a given nucleic acid sequence are known, the melting temperature can be predicted. It turns out that for oligonucleotides, these parameters can be well approximated by the nearest-neighbor model.
Nearest-neighbor method
The interaction between bases on different strands depends somewhat on the neighboring bases. Instead of treating a DNA helix as a string of interactions between base pairs, the nearest-neighbor model treats a DNA helix as a string of interactions between 'neighboring' base pairs. So, for example, the DNA shown below has nearest-neighbor interactions indicated by the arrows.
↓ ↓ ↓ ↓ ↓
5' C-G-T-T-G-A 3'
3' G-C-A-A-C-T 5'
The free energy of forming this DNA from the individual strands, ΔG°, is represented (at 37 °C) as
ΔG°37(predicted) = ΔG°37(C/G initiation) + ΔG°37(CG/GC) + ΔG°37(GT/CA) + ΔG°37(TT/AA) + ΔG°37(TG/AC) + ΔG°37(GA/CT) + ΔG°37(A/T initiation)
Except for the C/G initiation term, the first term represents the free energy of the first base pair, CG, in the absence of a nearest neighbor. The second term includes both the free energy of formation of the second base pair, GC, and stacking interaction between this base pair and the previous base pair. The remaining terms are similarly defined. In general, the free energy of forming a nucleic acid duplex is
ΔG°total = ΔG°initiation + Σi ni ΔG°i,
where ΔG°i represents the free energy associated with one of the ten possible nearest-neighbor nucleotide pairs, and ni represents its count in the sequence.
Each ΔG° term has enthalpic, ΔH°, and entropic, ΔS°, parameters, so the change in free energy is also given by
ΔG°total = ΔH°total − TΔS°total.
Values of ΔH° and ΔS° have been determined for the ten possible pairs of interactions. These are given in Table 1, along with the value of ΔG° calculated at 37 °C. Using these values, the value of ΔG37° for the DNA duplex shown above is calculated to be −22.4 kJ/mol. The experimental value is −21.8 kJ/mol.
The parameters associated with the ten groups of neighbors shown in table 1 are determined from melting points of short oligonucleotide duplexes. It works out that only eight of the ten groups are independent.
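The bookkeeping behind the summation above can be made concrete with a short Python sketch. The nearest-neighbor values in the dictionary below are placeholders standing in for Table 1 (they are not the published parameters), and the helper function itself is an illustration written for this text rather than a standard library routine.

```python
# Placeholder nearest-neighbor free energies at 37 C, in kcal/mol.
NN_DG37 = {
    "AA": -1.0, "AT": -0.9, "TA": -0.6, "CA": -1.4, "GT": -1.4,
    "CT": -1.3, "GA": -1.3, "CG": -2.2, "GC": -2.3, "GG": -1.8,
}
INITIATION = {"A": 1.0, "T": 1.0, "G": 1.0, "C": 1.0}  # placeholder initiation terms
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def nn_free_energy(seq):
    """Sum two initiation terms plus one nearest-neighbor term for each
    adjacent pair of bases along the 5'->3' strand. A dinucleotide is looked
    up directly or via its reverse complement, since both readings describe
    the same stack of two base pairs."""
    total = INITIATION[seq[0]] + INITIATION[seq[-1]]
    for i in range(len(seq) - 1):
        step = seq[i:i + 2]
        reverse_complement = COMPLEMENT[step[1]] + COMPLEMENT[step[0]]
        total += NN_DG37.get(step, NN_DG37.get(reverse_complement, 0.0))
    return total

print(nn_free_energy("CGTTGA"))  # the example duplex discussed above
```

Replacing the placeholder dictionaries with the measured ΔH° and ΔS° parameters, and combining the result with the two-state expression for Tm given earlier, reproduces the usual nearest-neighbor melting-temperature prediction.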
The nearest-neighbor model can be extended beyond the Watson-Crick pairs to include parameters for interactions between mismatches and neighboring base pairs. This allows the estimation of the thermodynamic parameters of sequences containing isolated mismatches, such as the following (arrows indicating the mismatch):
↓↓↓
5' G-G-A-C-T-G-A-C-G 3'
3' C-C-T-G-G-C-T-G-C 5'
These parameters have been fitted from melting experiments and an extension of Table 1 which includes mismatches can be found in literature.
A more realistic way of modeling the behavior of nucleic acids would seem to be to have parameters that depend on the neighboring groups on both sides of a nucleotide, giving a table with entries like "TCG/AGC". However, this would involve around 32 groups for Watson-Crick pairing and even more for sequences containing mismatches; the number of DNA melting experiments needed to get reliable data for so many groups would be inconveniently high. Other means exist to access thermodynamic parameters of nucleic acids: microarray technology allows the hybridization of tens of thousands of sequences to be monitored in parallel. These data, in combination with molecular adsorption theory, allow many thermodynamic parameters to be determined in a single experiment and permit going beyond the nearest-neighbor model. In general the predictions from the nearest-neighbor method agree reasonably well with experimental results, but some unexpected outlying sequences, calling for further insight, do exist. Finally, single-molecule unzipping assays offer increased accuracy and provide a wealth of new insight into the thermodynamics of DNA hybridization and the validity of the nearest-neighbour model.
See also
Melting point
Primer (molecular biology) for calculations of Tm
Base pair
Complementary DNA
Western blot
References
External links
Tm calculations in OligoAnalyzer – Integrated DNA Technologies
DNA thermodynamics calculations – Tm, melting profile, mismatches, free energy calculations
Tm calculation – by bioPHP.org.
https://web.archive.org/web/20080516194508/http://www.promega.com/biomath/calc11.htm#disc
Invitrogen Tm calculation
AnnHyb Open Source software for Tm calculation using the Nearest-neighbour method
Sigma-aldrich technical notes
Primer3 calculation
"Discovery of the Hybrid Helix and the First DNA-RNA Hybridization" by Alexander Rich
uMelt: Melting Curve Prediction
Tm Tool
Nearest Neighbor Database: Provides a description of RNA-RNA interaction nearest neighbor parameters and examples of their use.
DNA
Nucleic acids
Molecular biology
Biotechnology
Biochemical engineering
de:Desoxyribonukleinsäure#Schmelzpunkt | Nucleic acid thermodynamics | [
"Chemistry",
"Engineering",
"Biology"
] | 2,881 | [
"Biomolecules by chemical classification",
"Biological engineering",
"Chemical engineering",
"Biochemical engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry",
"Nucleic acids"
] |
9,299,479 | https://en.wikipedia.org/wiki/Third-party%20evidence%20for%20Apollo%20Moon%20landings | Third-party evidence for Apollo Moon landings is evidence, or analysis of evidence, about the Moon landings that does not come from either NASA or the U.S. government (the first party), or the Apollo Moon landing hoax theorists (the second party). This evidence provides independent confirmation of NASA's account of the six Apollo program Moon missions flown between 1969 and 1972.
Independent evidence
In this section are only those observations that are completely independent of NASA—no NASA facilities were used, and there was no NASA funding. Each of the countries mentioned in this section (Soviet Union, Japan, China, and India) has its own space program, builds its own space probes which are launched on their own launch vehicles, and has its own deep space communication network.
SELENE photographs
In 2008, the Japan Aerospace Exploration Agency (JAXA) SELENE lunar probe obtained several photographs showing evidence of Moon landings. On the left are two photographs taken on the lunar surface by astronauts on August 2, 1971 during the third Apollo 15 moonwalk at station 9A near Hadley Rille. On the right is a 2008 reconstruction from images taken by the SELENE terrain camera and 3D projected to the same vantage point as the surface photos. The terrain is a close match within the SELENE camera resolution of 10 metres.
The light-colored area of blown lunar surface dust created by the lunar module engine blast at the Apollo 15 landing site was photographed and confirmed by comparative analysis of photographs in May 2008. They correspond well to photographs taken from the Apollo 15 Command/Service Module showing a change in surface reflectivity due to the plume. This was the first visible trace of crewed landings on the Moon seen from space since the close of the Apollo program.
Chandrayaan-1
As with SELENE, the Terrain Mapping Camera of India's Chandrayaan-1 probe did not have enough resolution to record Apollo hardware. Nevertheless, as with SELENE, Chandrayaan-1 independently recorded evidence of lighter, disturbed soil around the Apollo 15 site.
Chandrayaan-2
In April 2021 the ISRO Chandrayaan-2 orbiter captured an image of the Apollo 11 Lunar Module Eagle descent stage. The orbiter's image of Tranquility Base, the Apollo 11 landing site, was released to the public in a presentation on September 3, 2021.
Chang'e 2
China's second lunar probe, Chang'e 2, which was launched in 2010 is capable of capturing lunar surface images with a resolution of up to 1.3 metres. It claims to have spotted traces of the Apollo landings and the lunar Rover, though the relevant imagery has not been publicly identified.
Danuri
South Korea's lunar probe, Danuri, which was launched in 2022 is capable of capturing lunar surface images. It imaged both Apollo 11 and Apollo 17 landing sites in 2023, with a good enough resolution to spot the landers.
Apollo missions tracked by independent parties
Aside from NASA, a number of entities and individuals observed, through various means, the Apollo missions as they took place. On later missions, NASA released information to the public explaining where third party observers could expect to see the various craft at specific times according to scheduled launch times and planned trajectories.
Observers of all missions
The Soviet Union monitored the missions at their Space Transmissions Corps, which was "fully equipped with the latest intelligence-gathering and surveillance equipment". Vasily Mishin, in an interview for the article "The Moon Programme That Faltered", describes how the Soviet Moon programme dwindled after the Apollo landing.
The missions were tracked by radar from several countries on the way to the Moon and back.
Kettering Grammar School
A group at Kettering Grammar School, using simple radio equipment, monitored Soviet and U.S. spacecraft and calculated their orbits. According to the group, in December 1972 a member "picks up Apollo 17 on its way to the Moon".
Apollo 8
Apollo 8 was the first crewed mission to orbit the Moon, but did not land.
On December 21, 1968, at 18:00 UT, amateur astronomers (H. R. Hatfield, M. J. Hendrie, F. Kent, Alan Heath, and M. J. Oates) in the UK photographed a fuel dump from the jettisoned S-IVB third rocket stage.
Pic du Midi Observatory (in the French Pyrenees); the Catalina Station of the Lunar and Planetary Laboratory (University of Arizona); Corralitos Observatory, New Mexico, then operated by Northwestern University; McDonald Observatory of the University of Texas; and Lick Observatory of the University of California all filed reports of observations.
Dr. Michael Moutsoulas at Pic du Midi Observatory reported an initial sighting around 17:10 UT on December 21 with the 1.1-metre reflector as an object (magnitude near 10, through clouds) moving eastward near the predicted location of Apollo 8. He used a 60 cm refractor telescope to observe a cluster of objects which were obscured by the appearance of a nebulous cloud at a time which matches a firing of the service module engine to assure adequate separation from the S-IVB. This event can be traced with the Apollo 8 Flight Journal, noting that launch was at 0751 EST or 12:51 UT on December 21.
Justus Dunlap and others at Corralitos Observatory (then operated by Northwestern University) obtained over 400 short-exposure intensified images, giving very accurate locations for the spacecraft.
The 2.1 m Otto Struve Telescope at McDonald Observatory, from 01:50 to 2:37 UT on December 23, observed the brightest object flashing as bright as magnitude 15, with the flash pattern recurring about once a minute.
The Lick Observatory observations during the return coast to Earth produced live television pictures broadcast to United States west coast viewers via KQED-TV in San Francisco.
An article in the March 1969 issue of Sky & Telescope contained many reports of optical tracking of Apollo 8.
The first post-launch sightings were from the Smithsonian Astrophysical Observatory (SAO) station on Maui. Many in Hawaii observed the trans-lunar injection burn near 15:44 UT on December 21.
Apollo 10
Like Apollo 8, Apollo 10 orbited the Moon but did not land.
A list of sightings of Apollo 10 were reported in "Apollo 10 Optical Tracking" by Sky & Telescope magazine, July 1969, pp. 62–63.
During the Apollo 10 mission the Corralitos Observatory was linked with the CBS news network. Images of the spacecraft going to the Moon were broadcast live.
Apollo 11
The Bochum Observatory director (Professor Heinz Kaminski) was able to provide confirmation of events and data independent of both the Russian and U.S. space agencies.
A compilation of sightings appeared in "Observations of Apollo 11" by Sky and Telescope magazine, November 1969.
At Jodrell Bank Observatory in the UK, the telescope was used to observe the mission, as it was used years previously for Sputnik. At the same time, Jodrell Bank scientists were tracking the uncrewed Soviet spacecraft Luna 15, which was trying to land on the Moon. In July 2009, Jodrell released some recordings they made.
Larry Baysinger, a radio amateur (W4EJA) and a technician for WHAS radio in Louisville, Kentucky, independently detected and recorded transmissions between the Apollo 11 astronauts on the lunar surface and the Lunar Module. Recordings made by Baysinger share certain characteristics with recordings made at Bochum Observatory by Kaminski, in that both Kaminski's and Baysinger's recordings do not include the Capsule Communicator (CAPCOM) in Houston, Texas, and the associated Quindar tones heard in NASA audio and seen on NASA Apollo 11 transcripts. Kaminski and Baysinger could only hear the transmissions from the Moon, and not transmissions to the Moon from the Earth.
The Arcetri Observatory near Florence, Italy, also detected transmissions coming from the mission using a 10-metre dish.
Apollo 12
Paul Maley reports several sightings of the Apollo 12 Command Module.
Sky and Telescope magazine published reports of the optical sighting of this mission.
Apollo 13
Apollo 13 was intended to land on the Moon, but an oxygen tank explosion resulted in the mission being aborted after trans-lunar injection. It flew by the Moon but did not orbit or land.
Chabot Observatory calendar records an application of optical tracking during the final phases of Apollo 13, on April 17, 1970:
Apollo 14
Corralitos Observatory photographed Apollo 14.
Sky and Telescope magazine published reports of the optical sighting of this mission.
Apollo 15
Paul Wilson and Richard T. Knadle, Jr. received voice transmissions from the Command/Service Module in lunar orbit on the morning of August 1, 1971. In an article for QST magazine they provide a detailed description of their work, with photographs.
Apollo 16
Jewett Observatory at Washington State University reported sightings of Apollo 16.
At least two different radio amateurs, W4HHK and K2RIW, reported reception of Apollo 16 signals with home-built equipment.
Bochum Observatory tracked the astronauts and intercepted the television signals from Apollo 16. The image was re-recorded in black and white in the 625 lines, 25 frames/s television standard onto 2-inch videotape using their sole quad machine. The transmissions are only of the astronauts and do not contain any voice from Houston, as the signal received came from the Moon only. The videotapes are held in storage at the observatory.
Apollo 17
Sven Grahn of the Swedish space program has described several amateur sightings of Apollo 17.
Independent research consistent with NASA claims
In this section is evidence, by independent researchers, that NASA's account is correct. However, at least somewhere in the investigation, there was some NASA involvement, or use of US government resources.
Existence and age of Moon rocks
A total of 382 kg (842 lb) of Moon rocks and dust were collected during the Apollo 11, 12, 14, 15, 16 and 17 missions. Some of the Moon rocks have been used in hundreds of experiments performed by both NASA researchers and planetary scientists at research institutions unaffiliated with NASA. These experiments have confirmed the age and origin of the rocks as lunar, and were used to identify lunar meteorites collected later from Antarctica. The oldest Moon rocks are up to 4.5 billion years old, making them 200 million years older than the oldest Earth rocks, which are from the Hadean eon and dated 3.8 to 4.3 billion years ago. The rocks returned by Apollo are very close in composition to the samples returned by the independent Soviet Luna programme. A rock brought back by Apollo 17 was dated to be 4.417 billion years old, with a margin of error of plus or minus 6 million years. The test was done by a group of researchers headed by Alexander Nemchin at Curtin University of Technology in Bentley, Australia.
Retroreflectors
The detection on Earth of reflections from laser ranging retro-reflectors (LRRRs, or arrays of corner-cube prisms used as targets for Earth-based tracking lasers) on Lunar Laser Ranging experiments left on the Moon is evidence of landings.
Quoting from James Hansen's 2005 biography of Neil Armstrong, First Man: The Life of Neil A. Armstrong:
For those few misguided souls who still cling to the belief that the Moon landings never happened, examination of the results of five decades of LRRR experiments should evidence how delusional their rejection of the Moon landing really is.
The NASA-independent Observatoire de la Côte d'Azur, McDonald, Apache Point, and Haleakalā observatories regularly use the Apollo LRRR. Lick Observatory attempted to detect reflections from Apollo 11's retroreflector while Armstrong and Aldrin were still on the Moon, but did not succeed until August 1, 1969. The Apollo 14 astronauts deployed a retroreflector on February 5, 1971, and McDonald Observatory detected it the same day. The Apollo 15 retroreflector was deployed on July 31, 1971, and was detected by McDonald Observatory within a few days.
The image on the left shows what is considered some of the most unambiguous evidence. This experiment repeatedly fires a laser at the Moon, at the spots where the Apollo landings were reported. The dots show when photons are received from the Moon. The dark line shows that a large number come back at a specific time, and hence were reflected by something quite small (well under a metre in size). Photons reflected from the surface come back over a much broader range of times (the whole vertical range of the plot corresponds to only 18 metres or so in range). The concentration of photons at a specific time appears when the laser is aimed at the Apollos 11, 14 or 15 landing sites; otherwise the expected featureless distribution is observed. The Apollo reflectors are still in use.
Strictly speaking, although retroreflectors left by Apollo astronauts are strong evidence that human-manufactured artifacts currently exist on the Moon and that human visitors left them there, they are not, on their own, conclusive evidence. Uncrewed missions are known to have placed such objects on the Moon, albeit not before 1970. Smaller retroreflectors were carried by the uncrewed landers Lunokhod 1 and Lunokhod 2 in 1970 and 1973, respectively. The location of Lunokhod 1 was unknown for nearly 40 years but it was rediscovered in 2010 in photographs by the Lunar Reconnaissance Orbiter (LRO) and its retroreflector is now in use. Both the United States and the USSR had the capability to soft-land objects on the surface of the Moon for several years before that. The USSR successfully landed its first uncrewed probe (Luna 9) on the Moon in February 1966, and the United States followed with Surveyor 1 in June 1966, but no uncrewed landers carried retroreflectors before Lunokhod 1 in November 1970. The retroreflectors are proof that human-made probes reached the exact locations of the Apollo 11, 14, and 15 landing sites at exactly the same time as those missions.
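The timing figures quoted above are easy to verify with back-of-the-envelope arithmetic: the round-trip time of a laser pulse fixes the range, and a spread in photon return times maps onto a spread in range through half the speed of light. The short Python sketch below is an illustration added here; the Earth–Moon distance used is a nominal mean value, not a figure from any particular ranging run.

```python
C = 299_792_458.0          # speed of light, m/s
MOON_DISTANCE = 384_400e3  # nominal mean Earth-Moon distance, m

# Round-trip time of a laser pulse to the Moon and back.
round_trip_s = 2 * MOON_DISTANCE / C
print(f"round trip = {round_trip_s:.2f} s")        # about 2.56 s

# A spread of return times corresponds to a spread in range:
# delta_range = c * delta_t / 2, so the ~18 m plot window quoted in
# the text corresponds to a timing window of roughly 120 ns.
window_m = 18.0
delta_t_ns = 2 * window_m / C * 1e9
print(f"{window_m} m of range = {delta_t_ns:.0f} ns of timing spread")
```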
Radio-telescopic observations
In October–November 1977, the Soviet radio telescope RATAN-600 observed all five transmitters of the ALSEP scientific packages placed on the Moon's surface by the Apollo landing missions (all except Apollo 11). Their selenographic coordinates and the transmitter power outputs (20 W) were in agreement with the NASA reports.
Photographs
Ground-based telescopes
In 2002, astronomers tested the optics of the Very Large Telescope by imaging the Apollo landing sites. The telescope was used to image the Moon, but the resolution obtained was not good enough to resolve the lunar landers or their long shadows.
New lunar missions
Post-Apollo lunar exploration missions have located and imaged artifacts of the Apollo program remaining on the Moon's surface.
Images taken by the Lunar Reconnaissance Orbiter mission beginning in July 2009 show the six Apollo Lunar Module descent stages, Apollo Lunar Surface Experiments Package (ALSEP) science experiments, astronaut footpaths, and lunar rover tire tracks. These images are the most effective proof to date to rebut the "landing hoax" theories. Although this probe was indeed launched by NASA, the camera and the interpretation of the images are under the control of an academic group — the LROC Science Operations Center at Arizona State University, along with many other academic groups. At least some of these groups, such as German Aerospace Center, Berlin, are not located in the US, and are not funded by the US government.
After the images shown here were taken, the LRO mission moved into a lower orbit for higher resolution camera work. All of the sites have since been re-imaged at higher resolution.
Comparison of the original 16 mm Apollo 17 LM camera footage during ascent to the 2011 LRO photos of the landing site show an almost exact match of the rover tracks.
Further imaging in 2012 shows the shadows cast by the flags planted by the astronauts on all Apollo landing sites. The exception is that of Apollo 11, which matches Buzz Aldrin's account of the flag being blown over by the lander's rocket exhaust on leaving the Moon.
Ultraviolet photographs
Long-exposure photos were taken with the Far Ultraviolet Camera/Spectrograph by Apollo 16 on April 21, 1972, from the surface of the Moon. Some of these photos show the Earth with stars from the Capricornus and Aquarius constellations in the background. The European Space Research Organisation's TD-1A satellite later scanned the sky for stars that are bright in ultraviolet light. The TD-1A data obtained with the shortest passband is a close match for the Apollo 16 photographs.
Apollo missions tracked by non-NASA personnel
This section contains reports of the lunar missions from facilities that had significant numbers of non-NASA employees. This includes facilities such as the Deep Space Network, which employed (and still employs) many local citizens in Spain and Australia, and facilities such as the Parkes Observatory, which were hired by NASA for specific tasks, but staffed by non-NASA personnel.
Observers of all missions
The NASA Manned Space Flight Network (MSFN) was a worldwide network of stations that tracked the Mercury, Gemini, Apollo and Skylab missions. Most MSFN stations were only needed during the launch, Earth orbit and landing phases of the lunar missions, but three "deep space" sites with larger antennas provided continuous coverage during the trans-lunar, trans-Earth and lunar mission phases. Today, these three sites form the NASA Deep Space Network: the Goldstone Deep Space Communications Complex near Goldstone, California; the Madrid Deep Space Communication Complex near Madrid, Spain; and the Canberra Deep Space Communication Complex, adjacent to the Tidbinbilla Nature Reserve, near Canberra, Australia.
Although most MSFN stations were NASA-owned, they employed many local citizens. NASA also contracted the Parkes Observatory in New South Wales, Australia, to supplement the three deep space sites, most famously during the Apollo 11 moonwalk as documented by radio astronomer John Sarkissian and portrayed (humorously and not quite accurately) in the 2000 film The Dish. The Parkes Observatory is not NASA-owned; it is, and always has been, owned and operated by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), a research agency of the Australian government. It would have been relatively easy for NASA to avoid using the Parkes Observatory to receive the Apollo 11 lunar surface television signals by scheduling the moonwalk at an earlier time when the Goldstone station could provide complete coverage.
Apollo 11
The Madrid Apollo Station, now part of the Deep Space Network, built in Fresnedillas, near Madrid, Spain, tracked Apollo 11. A large majority of the people working at this station were not employees of NASA, but of Spain's Instituto Nacional de Técnica Aeroespacial.
Apollo 12
Parts of Surveyor 3, which landed on the Moon in April 1967, were brought back to Earth by Apollo 12 in November 1969. These samples were shown to have been exposed to lunar conditions.
See also
Apollo program
Examination of Apollo Moon photographs
Moon landing
Moon rock
Citations
References
External links
"Relative Signal Strengths Using Lunar Retroreflectors" shows calculations of relative signal gain when using retroreflectors for Lunar ranging.
"Telescopic Tracking of the Apollo Lunar Missions" at Bill Keel's Space History Bits
"Bezos Expeditions recovers pieces of Apollo 11 rockets" by Jay Greene for CNET (March 20, 2013), contradicting Bill Kaysing's published claim that genuine Rocketdyne F-1 engines were not used.
Observational astronomy
Moon landing conspiracy theories
Apollo program | Third-party evidence for Apollo Moon landings | [
"Astronomy"
] | 4,001 | [
"Observational astronomy",
"Astronomical sub-disciplines"
] |
9,299,508 | https://en.wikipedia.org/wiki/Conidiobolomycosis | Conidiobolomycosis is a rare long-term fungal infection that is typically found just under the skin of the nose, sinuses, cheeks and upper lips. It may present with a nose bleed or a blocked or runny nose. Typically there is a firm painless swelling which can slowly extend to the nasal bridge and eyes, sometimes causing facial disfigurement.
Most cases are caused by Conidiobolus coronatus, a fungus found in soil and in the environment in general, which can infect healthy people. It is usually acquired by inhaling the spores of the fungus, but can be by direct infection through a cut in the skin such as an insect bite.
The extent of disease may be seen using medical imaging such as CT scanning of the nose and sinus. Diagnosis may be confirmed by biopsy, microscopy, culture and histopathology. Treatment is with long courses of antifungals and sometimes cutting out infected tissue. The condition has a good response to antifungal treatment, but can recur. The infection is rarely fatal.
The condition occurs more frequently in adults working or living in the tropical forests of South and Central America, West Africa and Southeast Asia. Males are affected more than females. The first case in a human was described in Jamaica in 1965.
Signs and symptoms
The infection presents with firm lumps just under the skin of the nose, sinuses, upper lips, mouth and cheeks. The swelling is painless and may feel "woody". Sinus pain may occur. Infection may extend to involve the nasal bridge, face and eyes, sometimes resulting in facial disfigurement. The nose may feel blocked or have a discharge, and may bleed.
Cause
Conidiobolomycosis is one of the two types of entomophthoromycosis, the other being basidiobolomycosis. It is caused mainly by Conidiobolus coronatus, but also by Conidiobolus incongruus and Conidiobolus lamprauges.
Mechanism
Conidiobolomycosis chiefly affects the central face, usually beginning in the nose before extending onto paranasal sinuses, cheeks, upper lip and pharynx. The disease is acquired usually by breathing in the spores of the fungus, which then infect the tissue of the nose and paranasal sinuses, from where it slowly spreads. It can attach to underlying tissues, but not bone. It can be acquired by direct infection through a small cut in the skin such as an insect bite. Thrombosis, infarction of tissue and spread into blood vessels does not occur. Deep and systemic infection is possible in people with a weakened immune system. Infection causes a local chronic granulomatous reaction.
Diagnosis
The condition is typically diagnosed after noticing facial changes. The extent of disease may be seen using medical imaging such as CT scanning of the nose and sinus. Diagnosis can be confirmed by biopsy, microscopy, and culture. Histology reveals wide but thin-walled fungal filaments with branching at right angles and only a few septa. The fungus is fragile and hence rarely isolated. An immunoallergic reaction might be observed, where a local antigen–antibody reaction causes eosinophils and hyaline material to surround the organism. Molecular methods may also be used to identify the fungus.
Differential diagnosis
Differential diagnosis includes soft tissue tumors. Other conditions that may appear similar include mucormycosis, cellulitis, rhinoscleroma and lymphoma.
Treatment
Treatment is with long courses of antifungals and sometimes cutting out infected tissue. Generally, treatment is with triazoles, preferably itraconazole. A second choice is potassium iodide, either alone or combined with itraconazole. In severe widespread disease, amphotericin B may be an option. The condition has a good response to antifungal treatment, but can recur. The infection is rarely fatal but often disfiguring.
Epidemiology
The disease is rare, occurring mainly in those working or living in the tropical forests of West Africa, Southeast Asia, South and Central America, as well as India, Saudi Arabia and Oman. Conidiobolus species have been found in areas of high humidity such as the coasts of the United Kingdom, eastern United States and West Africa.
Adults are affected more than children. Males are affected more than females.
History
The condition was first reported in 1961 in horses in Texas. The first case in a human was described in 1965 in Jamaica. Previously this genus was thought to only infect insects.
Other animals
Conidiobolomycosis affects spiders, termites and other arthropods. The condition has been described in dogs, horses, sheep and other mammals. Affected mammals typically present with irregular lumps in one or both nostrils that cause obstruction, bloody nasal discharge and noisy abnormal breathing.
References
External links
Animal fungal diseases
Fungal diseases | Conidiobolomycosis | [
"Biology"
] | 1,016 | [
"Fungi",
"Fungal diseases"
] |
9,300,583 | https://en.wikipedia.org/wiki/Wax%20thermostatic%20element | The wax thermostatic element was invented in 1934 by Sergius Vernet (1899–1968). Its principal application is in automotive thermostats used in the engine cooling system. The first applications in the plumbing and heating industries were in Sweden (1970) and in Switzerland (1971).
Wax thermostatic elements transform heat energy into mechanical energy using the thermal expansion of waxes when they melt. This wax motor principle also finds applications besides engine cooling systems, including heating system thermostatic radiator valves, plumbing, industrial, and agriculture.
Automotive thermostats
The internal combustion engine cooling thermostat maintains the temperature of the engine near its optimum operating temperature by regulating the flow of coolant to an air cooled radiator. This regulation is now carried out by an internal thermostat. Conveniently, both the sensing element of the thermostat and its control valve may be placed at the same location, allowing the use of a simple self-contained non-powered thermostat as the primary device for the precise control of engine temperature. Although most vehicles now have a temperature-controlled electric cooling fan, "the unassisted air stream can provide sufficient cooling up to 95% of the time" and so such a fan is not the mechanism for primary control of the internal temperature.
Research in the 1920s showed that cylinder wear was aggravated by condensation of fuel when it contacted a cool cylinder wall which removed the oil film. The development of the automatic thermostat in the 1930s solved this problem by ensuring fast engine warm-up.
The first thermostats used a sealed capsule of an organic liquid with a boiling point just below the desired opening temperature. These capsules were made in the form of a cylindrical bellows. As the liquid boiled inside the capsule, the capsule bellows expanded, opening a sheet brass plug valve within the thermostat. As these thermostats could fail in service, they were designed for easy replacement during servicing, usually by being mounted under the water outlet fitting at the top of the cylinder block. Conveniently this was also the hottest accessible part of the cooling circuit, giving a fast response when warming up.
Cooling circuits have a small bypass path even when the thermostat is closed, usually by a small hole in the thermostat. This allows enough flow of cooling water to heat the thermostat when warming up. It also provided an escape route for trapped air when first filling the system. A larger bypass is often provided, through the cylinder block and water pump, so as to keep the rising temperature distribution even.
Work on cooling high-performance aircraft engines in the 1930s led to the adoption of pressurised cooling systems, which became common on post-war cars. As the boiling point of water increases with increasing pressure, these pressurised systems could run at a higher temperature without boiling. This increased both the working temperature of the engine, thus its efficiency, and also the heat capacity of the coolant by volume, allowing smaller cooling systems that required less pump power. A drawback to the bellows thermostat was that it was also sensitive to pressure changes, thus could sometimes be forced shut again by pressure, leading to overheating. The later wax pellet type has a negligible change in its external volume, thus is insensitive to pressure changes. It is otherwise identical in operation to the earlier type. Many cars of the 1950s, or earlier, that were originally built with bellows thermostats were later serviced with replacement wax capsule thermostats, without requiring any change or adaption.
This most common modern form of thermostat now uses a wax pellet inside a sealed chamber. Rather than a liquid-vapour transition, these use a solid-liquid transition, which for waxes is accompanied by a large increase in volume. The wax is solid at low temperatures, and as the engine heats up, the wax melts and expands. The sealed chamber operates a rod which opens a valve when the operating temperature is exceeded. The operating temperature is fixed, but is determined by the specific composition of the wax, so thermostats of this type are available to maintain different temperatures, typically in the range of 70 to 90°C (160 to 200°F). Modern engines run hot, that is, over 80 °C (180 °F), in order to run more efficiently and to reduce the emission of pollutants.
While the thermostat is closed, there is no flow of coolant in the radiator loop, and coolant water is instead redirected through the engine, allowing it to warm up rapidly while also avoiding hot spots. The thermostat stays closed until the coolant temperature reaches the nominal thermostat opening temperature. The thermostat then progressively opens as the coolant temperature increases to the optimum operating temperature, increasing the coolant flow to the radiator. Once the optimum operating temperature is reached, the thermostat progressively increases or decreases its opening in response to temperature changes, dynamically balancing the coolant recirculation flow and coolant flow to the radiator to maintain the engine temperature in the optimum range as engine heat output, vehicle speed, and outside ambient temperature change. Under normal operating conditions the thermostat is open to about half of its stroke travel, so that it can open further or reduce its opening to react to changes in operating conditions. A correctly designed thermostat will never be fully open or fully closed while the engine is operating normally, or overheating or overcooling would occur.
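The proportional behaviour described above can be illustrated with a deliberately simple toy model; the Python sketch below is an assumption of this text (a linear opening characteristic with placeholder temperatures), not a description of any real thermostat's measured curve.

```python
def valve_opening(coolant_temp_c, opens_at_c=88.0, fully_open_at_c=100.0):
    """Toy linear model of a wax-pellet thermostat: opening fraction from 0
    (closed) to 1 (fully open). The temperatures are illustrative placeholders,
    not manufacturer data."""
    fraction = (coolant_temp_c - opens_at_c) / (fully_open_at_c - opens_at_c)
    return max(0.0, min(1.0, fraction))

for temp in (70, 88, 94, 100, 110):
    print(f"{temp:3d} C -> opening fraction {valve_opening(temp):.2f}")
```

Under this toy characteristic the valve sits near half of its travel at a normal operating temperature, leaving headroom to open further or close down as engine load, vehicle speed, or ambient temperature change, which is the behaviour described in the paragraph above.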
Engines which require a tighter control of temperature, as they are sensitive to "Thermal shock" caused by surges of coolant, may use a "constant inlet temperature" system. In this arrangement the inlet cooling to the engine is controlled by double-valve thermostat which mixes a re-circulating sensing flow with the radiator cooling flow. These employ a single capsule, but have two valve discs. Thus a very compact, and simple but effective, control function is achieved.
The double-valve thermostat may also regulate the flow of coolant to the carburettor: as long as the temperature of the coolant is relatively low, the carburettor is warmed, further speeding up the warming of the engine.
The wax used within the thermostat is specially manufactured for the purpose. Unlike a standard paraffin wax, which has a relatively wide range of carbon chain lengths, a wax used in the thermostat application has a very narrow range of carbon molecule chains. The extent of the chains is usually determined by the melting characteristics demanded by the specific end application. To manufacture a product in this manner requires very precise levels of distillation.
Types of elements
Flat diaphragm element
The temperature sensing material contained in the cup transfers pressure to the piston by means of the diaphragm and the plug, held tightly in position by the guide. On cooling, the initial position of the piston is obtained by means of a return spring.
Flat diaphragm elements are particularly noted for their high level of accuracy, and therefore mainly used in sanitary installations and heating.
Squeeze-push elements
Squeeze-Push elements contain a synthetic rubber sleeve-like component shaped like the 'finger of a glove' which surrounds the piston. As the temperature increases, pressure from the expansion of the thermostatic material moves the piston with a lateral squeeze and a vertical push. As with the flat diaphragm element, the piston returns to its initial position by means of a return spring. These elements are slightly less accurate but provide a longer stroke.
Properties
The stroke is the movement of the piston in relation to its starting point. The ideal stroke corresponds to the temperature range of the elements. According to the type of element, it can vary from 1.5 mm to 16 mm.
The temperature range lies between the minimum and maximum operating temperature of the element. Elements can cover temperatures ranging from -15 °C to +120 °C. Elements may move in proportion to the temperature change over some part of the range, or may open suddenly around a particular temperature depending on the composition of the waxes.
Hysteresis is the difference noted between the upstroke and down stroke curve on heating and cooling of the element. Hysteresis is caused by the thermal inertia of the element and by the friction between the parts in motion.
See also
Thermostatic radiator valve
Thermostatic mixing valve
References
External links
Vernatherm - Thermal Actuators - and other Thermostatic Fluid Controls - Rostra Vernatherm
ThermalActuators.com - Thermal Actuators - & Mechanical Function Information & Products - Thermal Actuators
Vernet.fr - Thermostatic Elements Cartridges Thermostats Electrothermic Actuator
Ysnews.com - Vernet founded leading Yellow Springs company
Vernay - 1946 Vernet founded Vernay Laboratories
thermal-actuators.com - Automotive | TU-POLY
Transducers
Auto parts
Temperature control | Wax thermostatic element | [
"Technology"
] | 1,878 | [
"Home automation",
"Temperature control"
] |
2,165,486 | https://en.wikipedia.org/wiki/Rosser%27s%20theorem | In number theory, Rosser's theorem states that the th prime number is greater than , where is the natural logarithm function. It was published by J. Barkley Rosser in 1939.
Its full statement is:
Let pn be the nth prime number. Then, for n ≥ 1, pn > n ln n.
In 1999, Pierre Dusart proved a tighter lower bound: pn > n(ln n + ln ln n − 1) for n ≥ 2.
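As a quick numerical sanity check of both bounds, the following Python sketch (an illustration assuming the SymPy library is available for generating primes) compares the nth prime against n ln n and against Dusart's expression for a few values of n:

```python
import math
from sympy import prime  # prime(n) returns the n-th prime number

for n in (2, 10, 100, 1000, 10000):
    p = prime(n)
    rosser_bound = n * math.log(n)
    dusart_bound = n * (math.log(n) + math.log(math.log(n)) - 1)  # for n >= 2
    print(n, p, round(rosser_bound, 1), round(dusart_bound, 1),
          p > rosser_bound, p > dusart_bound)
```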
See also
Prime number theorem
References
External links
Rosser's theorem article on Wolfram Mathworld.
Theorems about prime numbers
de:John Barkley Rosser#Satz von Rosser | Rosser's theorem | [
"Mathematics"
] | 109 | [
"Theorems in number theory",
"Theorems about prime numbers"
] |
2,165,494 | https://en.wikipedia.org/wiki/King%27s%20American%20Dispensatory | King's American Dispensatory is a book first published in 1854 that covers the uses of herbs used in American medical practice, especially by those involved in eclectic medicine, which was the botanical school of medicine in the 19th to 20th centuries. In 1880 John Uri Lloyd, an eclectic pharmacist of the late 19th and early 20th centuries, promised his friend, professor John King, to revise the pharmaceutical and chemical sections of the American Dispensatory. Eighteen years later an entirely rewritten eighteenth edition (third revision) was published in 1898. It was co-authored by eclectic physician Harvey Wickes Felter
External links
Kings American Dispensatory, 1905 edition at the Internet Archive
1854 books
1898 books
Eclectic medicine
Health and wellness books
Herbalism
Pharmacology literature | King's American Dispensatory | [
"Chemistry"
] | 161 | [
"Pharmacology",
"Pharmacology literature"
] |
2,165,654 | https://en.wikipedia.org/wiki/Gel%20extraction | In molecular biology, gel extraction or gel isolation is a technique used to isolate a desired fragment of intact DNA from an agarose gel following agarose gel electrophoresis. After extraction, fragments of interest can be mixed, precipitated, and enzymatically ligated together in several simple steps. This process, usually performed on plasmids, is the basis for rudimentary genetic engineering.
After DNA samples are run on an agarose gel, extraction involves four basic steps: identifying the fragments of interest, isolating the corresponding bands, isolating the DNA from those bands, and removing the accompanying salts and stain.
To begin, UV light is shone on the gel in order to illuminate all the ethidium bromide-stained DNA. Care must be taken to avoid exposing the DNA to mutagenic radiation for longer than absolutely necessary. The desired band is identified and physically removed with a cover slip or razor blade. The removed slice of gel should contain the desired DNA inside. An alternative method, utilizing SYBR Safe DNA gel stain and blue-light illumination, avoids the DNA damage associated with ethidium bromide and UV light.
Several strategies for isolating and cleaning the DNA fragment of interest exist.
Spin Column Extraction
Gel extraction kits are available from several major biotech manufacturers for a final cost of approximately US$1–2 per sample.
Protocols included in these kits generally call for the dissolution of the gel-slice in 3 volumes of chaotropic agent at 50 °C, followed by application of the solution to a spin-column (the DNA remains in the column), a 70% ethanol wash (the DNA remains in the column, salt and impurities are washed out), and elution of the DNA in a small volume (30 μL) of water or buffer.
Dialysis
The gel fragment is placed in a dialysis tube that is permeable to fluids but impermeable to molecules at the size of DNA, thus preventing the DNA from passing through the membrane when soaked in TE buffer. An electric field is established around the tubing (in a way similar to gel electrophoresis) long enough so that the DNA is removed from the gel but remains in the tube. The tube solution can then be pipetted out and will contain the desired DNA with minimal background.
Traditional
The traditional method of gel extraction involves creating a folded pocket of Parafilm wax paper and placing the agarose fragment inside. The agarose is physically compressed with a finger into a corner of the pocket, partially liquifying the gel and its contents. The liquid droplets can then be directed out of the pocket onto an exterior piece of Parafilm, where they are pipetted into a small tube. A butanol extraction removes the ethidium bromide stain, followed by a phenol/chloroform extraction of the cleaned DNA fragment.
The disadvantage of gel isolation is that background can only be removed if it can be physically identified using the UV light. If two bands are very close together, it can be hard to separate them without some contamination. In order to clearly identify the band of interest, further restriction digests may be necessary. Restriction sites unique to unwanted bands of similar size can aid in breaking up these potential contaminants.
References
Molecular biology
Laboratory techniques | Gel extraction | [
"Chemistry",
"Biology"
] | 677 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
2,165,732 | https://en.wikipedia.org/wiki/Snag%20%28ecology%29 | In forest ecology, a snag refers to a standing dead or dying tree, often missing a top or most of the smaller branches. In freshwater ecology it refers to trees, branches, and other pieces of naturally occurring wood found sunken in rivers and streams; it is also known as coarse woody debris. Snags provide habitat for a wide variety of wildlife but pose hazards to river navigation. When used in manufacturing, especially in Scandinavia, they are often called dead wood and in Finland, kelo wood.
Forest snags
Snags are an important structural component in forest communities, making up 10–20% of all trees present in old-growth tropical, temperate, and boreal forests. Snags and downed coarse woody debris represent a large portion of the woody biomass in a healthy forest.
In temperate forests, snags provide critical habitat for more than 100 species of bird and mammal, and snags are often called 'wildlife trees' by foresters. Dead, decaying wood supports a rich community of decomposers like bacteria and fungi, insects, and other invertebrates. These organisms and their consumers, along with the structural complexity of cavities, hollows, and broken tops make snags important habitat for birds, bats, and small mammals, which in turn feed larger mammalian predators.
Snags are optimal habitat for primary cavity nesters such as woodpeckers which create the majority of cavities used by secondary cavity users in forest ecosystems. Woodpeckers excavate cavities for more than 80 other species and the health of their populations relies on snags. Most snag-dependent birds and mammals are insectivorous and represent a major portion of the insectivorous forest fauna, and are important factors in controlling forest insect populations. There are many instances in which birds reduced outbreak populations of forest insects, such as woodpeckers affecting outbreaks of southern hardwood borers and Engelmann spruce beetles.
Snag creation occurs naturally as trees die due to old age, disease, drought, or wildfire. A snag undergoes a series of changes from the time the tree dies until final collapse, and each stage in the decay process has particular value to certain wildlife species. Snag persistence depends on two factors, the size of the stem, and the durability of the wood of the species concerned. The snags of some large conifers, such as Giant Sequoia and Coast Redwood on the Pacific Coast of North America, and the Alerce of Patagonia, can remain intact for 100 years or more, becoming progressively shorter with age, while other snags with rapidly decaying wood, such as aspen and birch, break up and collapse in 2–10 years.
Snag forests, or complex early seral forests, are ecosystems that occupy potentially forested sites after a stand-replacement disturbance and before re-establishment of a closed-forest canopy. They are generated by natural disturbances such as wildfire or insect outbreaks that reset ecological succession processes and follow a pathway that is influenced by biological legacies (e.g., large live trees and snags downed logs, seed banks, resprout tissue, fungi, and other live and dead biomass) that were not removed during the initial disturbance.
Water hunting birds like the osprey or kingfishers can be found near water, perched in a snag tree, or feeding upon their fish catch.
Freshwater snags
In freshwater ecology in Australia and the United States, the term snag is used to refer to the trees, branches and other pieces of naturally occurring wood found in a sunken form in rivers and streams. Such snags have been identified as being critical for shelter and as spawning sites for fish, and are one of the few hard substrates available for biofilm growth supporting aquatic invertebrates in lowland rivers flowing through alluvial flood plains. Snags are important as sites for biofilm growth and for shelter and feeding of aquatic invertebrates in both lowland and upland rivers and streams.
In Australia, the role of freshwater snags has been largely ignored until recently, and more than one million snags have been removed from the Murray-Darling basin. Large tracts of the lowland reaches of the Murray-Darling system are now devoid of the snags that native fish like Murray cod require for shelter and breeding. The damage such wholesale snag removal has caused is enormous but difficult to quantify, however some quantification attempts have been made. Most snags in these systems are river red gum snags. As the dense wood of river red gum is almost impervious to rot it is thought that some of the river red gum snags removed in past decades may have been several thousand years old.
Maritime hazard
Also known as deadheads, partially submerged snags posed hazards to early riverboat navigation and commerce. If hit, snags punctured the wooden hulls used in the 19th century and early 20th century. Snags were, in fact, the most commonly encountered hazard, especially in the early years of steamboat travel. In the United States, the U.S. Army Corps of Engineers operated "snagboats" such as the W. T. Preston in the Puget Sound of Washington State and the Montgomery in the rivers of Alabama to pull out and clear snags. Starting in 1824, there were successful efforts to remove snags from the Mississippi and its tributaries. By 1835, a lieutenant reported to the Chief of Engineers that steamboat travel had become much safer, but by the mid-1840s the appropriations for snag removal dried up and snags re-accumulated until after the Civil War.
Dead wood products
In Scandinavia and Finland, snags, invariably pine trees, known in Finnish as kelo and in Swedish as torraka, are collected for the production of different objects, from furniture to entire log houses. Commercial enterprises market them abroad as "dead wood" or in Finland as "kelo wood". They have been especially prized for their silver-grey weathered surface in the manufacture of vernacular or national romantic products. The suppliers of "dead wood" emphasise its age: the wood has developed with dehydration in the dry coldness of the subarctic zones, the tree having stopped growing after some 300–400 years, and the tree has remained upright for another few hundred years. "Dead wood" logs are easier to transport and handle than normal logs due to their lightness.
See also
Coarse woody debris
Complex early seral forest
Large woody debris
Stream restoration
Tree hollow
References
External links
Ecology terminology
Dead wood
Forest ecology
Habitat
Limnology | Snag (ecology) | [
"Biology"
] | 1,360 | [
"Ecology terminology"
] |
2,165,771 | https://en.wikipedia.org/wiki/Methylthiomethyl%20ether | In organic chemistry a methylthiomethyl (MTM) ether is a protective group for hydroxyl groups. Hydroxyl groups are present in many chemical compounds and they must be protected during oxidation, acylation, halogenation, dehydration and other reactions to which they are susceptible.
Many kinds of protective groups for hydroxyl groups have been developed and used in organic chemistry, but the number of protective groups for tertiary hydroxyl groups, which are susceptible to acid-catalyzed dehydration, is still small because of their poor reactiveness. They can be easily protected with MTM ethers and recovered in good yield.
To introduce an MTM ether to a hydroxyl group, two methods are mainly used. One is a typical Williamson ether synthesis using an MTM halide as the MTM source and sodium hydride (NaH) as a base. The other is a special method, in which dimethyl sulfoxide (DMSO) and acetic anhydride (Ac2O) are used. In this case, the reaction proceeds with a Pummerer rearrangement:
MTM ethers have another advantage. They are removed by neutral (but toxic) mercuric chloride, to which most other ethers are stable. As a result, the selective deprotection of polyfunctional molecules becomes possible using MTM ethers as the protective groups for their hydroxyl groups.
Alcohol protection
Methylthiomethyl (MTM) group is used as a protecting group for alcohols in organic synthesis. This type of alcohol protecting group is robust under mild acidic reaction conditions.
Most common protection methods
Treatment of alcohol with sodium hydride and methylthiomethyl halide
Dimethyl sulfoxide (DMSO) and acetic acid (AcOH) in acetic anhydride (Ac2O) at ambient temperature
Most common deprotection methods
Mercury (II) chloride (HgCl2); calcium carbonate (CaCO3) is used as an acid scavenger for acid sensitive substrates
Iodomethane (MeI) in presence of sodium bicarbonate (NaHCO3) at elevated temperatures (this type of reaction is generally carried out in acetone/H2O solution)
Magnesium iodide (MgI2) and acetic anhydride (Ac2O) in ether at ambient temperature
References
Thioethers
Protecting groups | Methylthiomethyl ether | [
"Chemistry"
] | 505 | [
"Protecting groups",
"Functional groups",
"Reagents for organic chemistry"
] |
2,166,357 | https://en.wikipedia.org/wiki/Caldarium | A (also called a calidarium, cella caldaria or cella coctilium) was a room with a hot plunge bath, used in a Roman bath complex.
The boiler supplying hot water to a baths complex was also called a caldarium.
This was a very hot and steamy room heated by a hypocaust, an underfloor heating system using tunnels with hot air, heated by a furnace tended by slaves. It was also the hottest room in the regular sequence of bathing rooms; after the caldarium, bathers would progress back through the tepidarium to the frigidarium.
A caldarium in both public and private baths followed a common plan with three main parts: a warm-water bath sunk into the floor, a semicircular alcove where bathers would sit in order to induce sweating, and a vacant space in the middle of the room meant for physical exercise before going to sit in the alcove.
The bath's patrons would use olive oil to cleanse themselves by applying it to their bodies and using a strigil to remove the excess. This was sometimes left on the floor for the slaves to pick up or put back in the pot for the women to use for their hair.
The temperature of the caldarium is not known exactly; however, too high a floor surface temperature would have been uncomfortable to stand on with bare feet.
See also
Ancient Roman bathing
References
External links
Greek and Roman baths at the Perseus Project
Ancient Roman baths
Rooms | Caldarium | [
"Engineering"
] | 327 | [
"Rooms",
"Architecture"
] |
2,166,569 | https://en.wikipedia.org/wiki/Shape%20resonance | In quantum mechanics, a shape resonance is a metastable state in which an electron is trapped due to the shape of a potential barrier.
Altunata describes a state as being a shape resonance if, "the internal state of the system remains unchanged upon disintegration of the quasi-bound level."
A more general discussion of resonances and their taxonomies in molecular systems can be found in the review article by Schulz; for the discovery of the Fano resonance line-shape and for Majorana's pioneering work in this field, see Antonio Bianconi; and for a mathematical review, see Combes et al.
Quantum mechanics
In quantum mechanics, a shape resonance, in contrast to a Feshbach resonance, is a resonance which is not turned into a bound state if the coupling between some degrees of freedom and the degrees of freedom associated to the fragmentation (reaction coordinates) are set to zero. More simply, the shape resonance total energy is more than the separated fragment energy.
Practical implications of this difference for lifetimes and spectral widths are mentioned in works such as Zobel.
Related terms include a special kind of shape resonance, the core-excited shape resonance, and trap-induced shape resonance.
Of course in one-dimensional systems, resonances are shape resonances. In a system with more than one degree of freedom, this definition makes sense only if the separable model, which supposes the two groups of degrees of freedom uncoupled, is a meaningful approximation. When the coupling becomes large, the situation is much less clear.
In the case of atomic and molecular electronic structure problems, it is well known that the self-consistent field (SCF) approximation is relevant at least as a starting point of more elaborate methods. The Slater determinants built from SCF orbitals (atomic or molecular orbitals) are shape resonances if only one electronic transition is required to emit one electron.
Today, there is some debate about the definition and even the existence of the shape resonance in some systems observed with molecular spectroscopy. It has been observed experimentally in the anionic yields from the photofragmentation of small molecules, where it provides details of internal structure.
In nuclear physics the concept of "shape resonance" is described by Amos de-Shalit and Herman Feshbach in their book (Nuclear Physics: Nuclear Structure, John Wiley & Sons, New York, 1974, p. 87; https://books.google.com/books?id=A-XvAAAAMAAJ):
"It is well known that the scattering from a potential shows characteristics peaks, as a function of energy, for such values of E that make the integral number of wave lengths sit within the potential. The resulting shape resonances are rather broad, their width being of the order of ...."
The shape resonances were observed around the years 1949–54 in nuclear scattering experiments. They appear as broad asymmetric peaks in the scattering cross section of neutrons or protons scattered by nuclei. The name "shape resonance" was introduced to describe the fact that the resonance in the potential scattering of a particle of energy E is controlled by the shape of the nucleus. In fact, a shape resonance occurs where an integral number of wavelengths of the particle fits within the potential of the nucleus of radius R. Therefore, measurements of the energies of shape resonances in neutron-nucleus scattering were used in the years from 1947 to 1954 to determine the radii R of nuclei with a precision of ±1×10⁻¹³ cm, as can be seen in the chapter "Elastic Cross Sections" of A Textbook in Nuclear Physics by R. D. Evans.
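As a rough orientation only (a hedged square-well sketch, not the detailed treatment in Evans; the well depth $V_0$ and the exact numerical factor depend on the model and on whether full or half wavelengths are counted), the "integral number of wavelengths" condition for a nucleus modelled as a square well of radius $R$ reads approximately

$$K R \;\approx\; n\pi, \qquad K=\frac{\sqrt{2m(E+V_0)}}{\hbar},$$

so the spacing of the measured resonance energies $E_n$ fixes the nuclear radius $R$.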
The "shape resonances" are discussed in general introductory academic courses of quantum mechanics in the frame of potential scattering phenomena.
Shape resonances arise from the quantum interference between closed and open scattering channels. At the resonance energy, a quasi-bound state is degenerate with the continuum. This quantum interference in many-body systems has been described using quantum mechanics by Gregor Wentzel, for the interpretation of the Auger effect; by Ettore Majorana, for dissociation processes and quasi-bound states; by Ugo Fano, for the atomic auto-ionization states in the continuum of the helium atomic spectrum; and by Victor Frederick Weisskopf, J. M. Blatt, and Herman Feshbach, for nuclear scattering experiments.
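The characteristic asymmetric profile produced by this interference is usually written in the Fano form (a standard textbook formula, quoted here for orientation rather than taken from the works cited above):

$$\sigma(\epsilon)=\sigma_0\,\frac{(q+\epsilon)^2}{1+\epsilon^2},\qquad \epsilon=\frac{2(E-E_r)}{\Gamma},$$

where $E_r$ is the resonance energy, $\Gamma$ its width, $q$ the asymmetry parameter set by the ratio of the resonant to the direct scattering amplitude, and $\sigma_0$ the background cross-section.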
Shape resonances are related to the existence of nearly stable bound states (that is, resonances) of two objects that dramatically influence how those two objects interact when their total energy is near that of the bound state. When the total energy of the objects is close to the energy of the resonance, they interact strongly and their scattering cross-section becomes very large.
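Near an isolated, purely elastic resonance in the partial wave $\ell$, this enhancement is described by the standard Breit–Wigner form (again a textbook expression, not a result specific to the sources discussed here):

$$\sigma_\ell(E)\;\approx\;\frac{4\pi}{k^2}\,(2\ell+1)\,\frac{(\Gamma/2)^2}{(E-E_r)^2+(\Gamma/2)^2},$$

which reaches the unitarity limit $4\pi(2\ell+1)/k^2$ at $E=E_r$; here $k$ is the relative wavevector of the two objects.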
A particular type of "shape resonance" occurs in multiband or two-band superconducting heterostructures at atomic limit called superstripes due to quantum interference of a first pairing channel in a first wide band and a second pairing channel in a second band where the chemical potential is tuned near a Lifshitz transition at the band edge or at the topological electronic transitions of the Fermi surface type "neck-collapsing" or "neck-disrupting"
See also
Resonance (particle physics)
Feshbach–Fano partitioning
References
Scattering
Spectroscopy | Shape resonance | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,118 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"Spectroscopy"
] |
2,166,633 | https://en.wikipedia.org/wiki/Core-excited%20shape%20resonance | A core-excited shape resonance is a shape resonance in a system with more than one degree of freedom where, after fragmentation, one of the fragments is in an excited state. It is sometimes very difficult to distinguish a core-excited shape resonance from a Feshbach resonance.
See also
See the definition of Feshbach resonances for more details.
External links
A short FAQ on quantum resonances
Scattering | Core-excited shape resonance | [
"Physics",
"Chemistry",
"Materials_science"
] | 83 | [
"Scattering stubs",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics"
] |
2,166,801 | https://en.wikipedia.org/wiki/Empty%20domain | In first-order logic, the empty domain is the empty set, having no members. In traditional and classical logic, domains are required to be non-empty in order that certain theorems be valid. Interpretations with an empty domain are shown to be a trivial case by a convention originating at least in 1927 with Bernays and Schönfinkel (though possibly earlier), but often attributed to Quine's 1951 Mathematical Logic. The convention is to assign any formula beginning with a universal quantifier the value truth, while any formula beginning with an existential quantifier is assigned the value falsehood. This follows from the idea that existentially quantified statements have existential import (i.e. they imply the existence of something) while universally quantified statements do not. This interpretation reportedly stems from George Boole in the late 19th century, but this is debatable. In modern model theory, it follows immediately from the truth conditions for quantified sentences:

$$\mathcal{M} \models \exists x\,\varphi \;\iff\; \mathcal{M} \models \varphi[d/x]\ \text{for some } d \in D$$
$$\mathcal{M} \models \forall x\,\varphi \;\iff\; \mathcal{M} \models \varphi[d/x]\ \text{for every } d \in D$$

where $D$ is the domain of the model $\mathcal{M}$.
In other words, an existential quantification of the open formula φ is true in a model iff there is some element in the domain (of the model) that satisfies the formula; i.e. iff that element has the property denoted by the open formula. A universal quantification of an open formula φ is true in a model iff every element in the domain satisfies that formula. (Note that in the metalanguage, "everything that is such that X is such that Y" is interpreted as a universal generalization of the material conditional "if anything is such that X then it is such that Y". Also, the quantifiers are given their usual objectual readings, so that a positive existential statement has existential import, while a universal one does not.) An analogous case concerns the empty conjunction and the empty disjunction. The semantic clauses for, respectively, conjunctions and disjunctions are given by
$$\mathcal{M} \models \bigwedge\Gamma \;\iff\; \mathcal{M} \models \varphi\ \text{for every } \varphi \in \Gamma$$
$$\mathcal{M} \models \bigvee\Gamma \;\iff\; \mathcal{M} \models \varphi\ \text{for some } \varphi \in \Gamma.$$
It is easy to see that the empty conjunction is trivially true, and the empty disjunction trivially false.
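The same conventions are built into many programming languages. The short Python sketch below is illustrative only (the helper names forall and exists are hypothetical, not part of the logical literature cited here): a universal quantification over an empty domain comes out true, an existential one false, and all()/any() applied to an empty sequence give the empty conjunction and disjunction their trivial values.

```python
# Evaluating quantified sentences over a possibly empty domain.

def forall(domain, predicate):
    """Truth of a universally quantified formula over the given domain."""
    return all(predicate(d) for d in domain)

def exists(domain, predicate):
    """Truth of an existentially quantified formula over the given domain."""
    return any(predicate(d) for d in domain)

def is_self_identical(x):
    return x == x

empty_domain = []
print(forall(empty_domain, is_self_identical))  # True: no counterexample exists
print(exists(empty_domain, is_self_identical))  # False: no witness exists

# The empty conjunction and the empty disjunction behave the same way:
print(all([]))  # True  (empty conjunction is trivially true)
print(any([]))  # False (empty disjunction is trivially false)
```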
Logics whose theorems are valid in every domain, including the empty one, were first considered by Jaskowski 1934, Mostowski 1951, Hailperin 1953, Quine 1954, Leonard 1956, and Hintikka 1959. While Quine called such logics "inclusive" logic, they are now referred to as free logic.
See also
Logical cube
Logical hexagon
Square of opposition
Triangle of opposition
Table of logic symbols
References
Predicate logic | Empty domain | [
"Mathematics"
] | 516 | [
"Mathematical logic",
"Predicate logic",
"Basic concepts in set theory"
] |