In mathematics , a Hermitian symmetric space is a Hermitian manifold which at every point has an inversion symmetry preserving the Hermitian structure. First studied by Élie Cartan , they form a natural generalization of the notion of Riemannian symmetric space from real manifolds to complex manifolds .
Every Hermitian symmetric space is a homogeneous space for its isometry group and has a unique decomposition as a product of irreducible spaces and a Euclidean space. The irreducible spaces arise in pairs as a non-compact space that, as Borel showed, can be embedded as an open subspace of its compact dual space. Harish Chandra showed that each non-compact space can be realized as a bounded symmetric domain in a complex vector space. The simplest case involves the groups SU(2), SU(1,1) and their common complexification SL(2, C ). In this case the non-compact space is the unit disk , a homogeneous space for SU(1,1). It is a bounded domain in the complex plane C . The one-point compactification of C , the Riemann sphere , is the dual space, a homogeneous space for SU(2) and SL(2, C ).
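To get a quick numerical feel for this prototype, the following minimal sketch (assuming NumPy is available; the helper names are ad hoc) samples elements of SU(1,1) and checks that their Möbius action sends points of the open unit disk back into the disk.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su11():
    # An element of SU(1,1) has the form [[a, b], [conj(b), conj(a)]]
    # with |a|^2 - |b|^2 = 1; parametrize a = cosh(t) e^{is}, b = sinh(t) e^{iu}.
    t, s, u = rng.uniform(0, 1), rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi)
    a = np.cosh(t) * np.exp(1j * s)
    b = np.sinh(t) * np.exp(1j * u)
    return np.array([[a, b], [np.conj(b), np.conj(a)]])

def moebius(g, z):
    # Action of a 2x2 matrix by a fractional linear (Möbius) transformation.
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

for _ in range(1000):
    g = random_su11()
    z = rng.uniform(0, 1) * np.exp(2j * np.pi * rng.uniform())  # point with |z| < 1
    w = moebius(g, z)
    assert abs(np.linalg.det(g) - 1) < 1e-10   # g lies in SL(2, C)
    assert abs(w) < 1                          # SU(1,1) preserves the open unit disk
print("SU(1,1) maps the unit disk to itself in all sampled cases")
```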
Irreducible compact Hermitian symmetric spaces are exactly the homogeneous spaces of simple compact Lie groups by maximal closed connected subgroups which contain a maximal torus and have center isomorphic to the circle group. There is a complete classification of irreducible spaces, with four classical series, studied by Cartan, and two exceptional cases; the classification can be deduced from Borel–de Siebenthal theory , which classifies closed connected subgroups containing a maximal torus. Hermitian symmetric spaces appear in the theory of Jordan triple systems , several complex variables , complex geometry , automorphic forms and group representations , in particular permitting the construction of the holomorphic discrete series representations of semisimple Lie groups. [ 1 ]
Let H be a connected compact semisimple Lie group, σ an automorphism of H of order 2 and H σ the fixed point subgroup of σ. Let K be a closed subgroup of H lying between H σ and its identity component . The compact homogeneous space H / K is called a symmetric space of compact type . The Lie algebra h {\displaystyle {\mathfrak {h}}} admits a decomposition

h = k ⊕ m , {\displaystyle {\mathfrak {h}}={\mathfrak {k}}\oplus {\mathfrak {m}},}
where k {\displaystyle {\mathfrak {k}}} , the Lie algebra of K , is the +1 eigenspace of σ and m {\displaystyle {\mathfrak {m}}} the –1 eigenspace. If k {\displaystyle {\mathfrak {k}}} contains no simple summand of h {\displaystyle {\mathfrak {h}}} , the pair ( h {\displaystyle {\mathfrak {h}}} , σ) is called an orthogonal symmetric Lie algebra of compact type . [ 2 ]
Any inner product on h {\displaystyle {\mathfrak {h}}} , invariant under the adjoint representation and σ, induces a Riemannian structure on H / K , with H acting by isometries. A canonical example is given by minus the Killing form . Under such an inner product, k {\displaystyle {\mathfrak {k}}} and m {\displaystyle {\mathfrak {m}}} are orthogonal. H / K is then a Riemannian symmetric space of compact type. [ 3 ]
The symmetric space H / K is called a Hermitian symmetric space if it has an almost complex structure preserving the Riemannian metric. This is equivalent to the existence of a linear map J with J 2 = − I on m {\displaystyle {\mathfrak {m}}} which preserves the inner product and commutes with the action of K .
If ( h {\displaystyle {\mathfrak {h}}} ,σ) is Hermitian, K has non-trivial center and the symmetry σ is inner, implemented by an element of the center of K .
In fact J lies in k {\displaystyle {\mathfrak {k}}} and exp tJ forms a one-parameter group in the center of K . This follows because if A , B , C , D lie in m {\displaystyle {\mathfrak {m}}} , then by the invariance of the inner product on h {\displaystyle {\mathfrak {h}}} [ 4 ]
Replacing A and B by JA and JB , it follows that
Define a linear map δ on h {\displaystyle {\mathfrak {h}}} by extending J to be 0 on k {\displaystyle {\mathfrak {k}}} . The last relation shows that δ is a derivation of h {\displaystyle {\mathfrak {h}}} . Since h {\displaystyle {\mathfrak {h}}} is semisimple, δ must be an inner derivation, so that

δ = ad ( T + A ) {\displaystyle \delta =\operatorname {ad} (T+A)}
with T in k {\displaystyle {\mathfrak {k}}} and A in m {\displaystyle {\mathfrak {m}}} . Taking X in k {\displaystyle {\mathfrak {k}}} , it follows that A = 0 and T lies in the center of k {\displaystyle {\mathfrak {k}}} and hence that K is non-semisimple. The symmetry σ is implemented by z = exp π T and the almost complex structure by exp π/2 T . [ 5 ]
The innerness of σ implies that K contains a maximal torus of H , so has maximal rank. The centralizer of the subgroup generated by the torus S of elements exp tT is connected, since if x is any element commuting with S there is a maximal torus containing x and S , and this torus lies in the centralizer. Moreover, the centralizer contains K , since S is central in K , and is contained in K , since z lies in S . So K is the centralizer of S and hence connected. In particular K contains the center of H . [ 2 ]
The symmetric space or the pair ( h {\displaystyle {\mathfrak {h}}} , σ) is said to be irreducible if the adjoint action of k {\displaystyle {\mathfrak {k}}} (or equivalently the identity component of H σ or K ) is irreducible on m {\displaystyle {\mathfrak {m}}} . This is equivalent to the maximality of k {\displaystyle {\mathfrak {k}}} as a subalgebra. [ 6 ]
In fact there is a one-one correspondence between intermediate subalgebras l {\displaystyle {\mathfrak {l}}} and K -invariant subspaces m 1 {\displaystyle {\mathfrak {m}}_{1}} of m {\displaystyle {\mathfrak {m}}} given by

l = k ⊕ m 1 . {\displaystyle {\mathfrak {l}}={\mathfrak {k}}\oplus {\mathfrak {m}}_{1}.}
Any orthogonal symmetric algebra ( g {\displaystyle {\mathfrak {g}}} , σ) of Hermitian type can be decomposed as an (orthogonal) direct sum of irreducible orthogonal symmetric algebras of Hermitian type. [ 7 ]
In fact h {\displaystyle {\mathfrak {h}}} can be written as a direct sum of simple algebras

h = h 1 ⊕ ⋯ ⊕ h N , {\displaystyle {\mathfrak {h}}={\mathfrak {h}}_{1}\oplus \cdots \oplus {\mathfrak {h}}_{N},}
each of which is left invariant by the automorphism σ and the complex structure J , since they are both inner. The eigenspace decomposition of h 1 {\displaystyle {\mathfrak {h}}_{1}} coincides with its intersections with k {\displaystyle {\mathfrak {k}}} and m {\displaystyle {\mathfrak {m}}} . So the restriction of σ to h 1 {\displaystyle {\mathfrak {h}}_{1}} is irreducible.
This decomposition of the orthogonal symmetric Lie algebra yields a direct product decomposition of the corresponding compact symmetric space H / K when H is simply connected. In this case the fixed point subgroup H σ is automatically connected. For simply connected H , the symmetric space H / K is the direct product of H i / K i with H i simply connected and simple. In the irreducible case, K is a maximal connected subgroup of H . Since K acts irreducibly on m {\displaystyle {\mathfrak {m}}} (regarded as a complex space for the complex structure defined by J ), the center of K is a one-dimensional torus T , given by the operators exp tT . Since H is simply connected and K is connected, the quotient H / K is simply connected. [ 8 ]
If H / K is irreducible with K non-semisimple, the compact group H must be simple and K of maximal rank. From Borel–de Siebenthal theory , the involution σ is inner and K is the centralizer of its center, which is isomorphic to T . In particular K is connected. It follows that H / K is simply connected and there is a parabolic subgroup P in the complexification G of H such that H / K = G / P . In particular there is a complex structure on H / K and the action of H is holomorphic. Since any Hermitian symmetric space is a product of irreducible spaces, the same is true in general.
At the Lie algebra level, there is a symmetric decomposition

h = k ⊕ m , {\displaystyle {\mathfrak {h}}={\mathfrak {k}}\oplus {\mathfrak {m}},}
where ( m , J ) {\displaystyle ({\mathfrak {m}},J)} is a real vector space with a complex structure J , whose complex dimension is given in the table. Correspondingly, there is a graded Lie algebra decomposition

g = m + ⊕ l ⊕ m − , {\displaystyle {\mathfrak {g}}={\mathfrak {m}}_{+}\oplus {\mathfrak {l}}\oplus {\mathfrak {m}}_{-},}
where m ⊗ C = m − ⊕ m + {\displaystyle {\mathfrak {m}}\otimes \mathbb {C} ={\mathfrak {m}}_{-}\oplus {\mathfrak {m}}_{+}} is the decomposition into + i and − i eigenspaces of J and l = k ⊗ C {\displaystyle {\mathfrak {l}}={\mathfrak {k}}\otimes \mathbb {C} } . The Lie algebra of P is the semidirect product m + ⊕ l {\displaystyle {\mathfrak {m}}^{+}\oplus {\mathfrak {l}}} . The complex Lie algebras m ± {\displaystyle {\mathfrak {m}}_{\pm }} are Abelian. Indeed, if U and V lie in m ± {\displaystyle {\mathfrak {m}}_{\pm }} , [ U , V ] = [ JU , JV ] = [± iU , ± iV ] = –[ U , V ], so the Lie bracket must vanish.
The complex subspaces m ± {\displaystyle {\mathfrak {m}}_{\pm }} of m C {\displaystyle {\mathfrak {m}}_{\mathbb {C} }} are irreducible for the action of K , since J commutes with K so that each is isomorphic to m {\displaystyle {\mathfrak {m}}} with complex structure ± J . Equivalently the centre T of K acts on m + {\displaystyle {\mathfrak {m}}_{+}} by the identity representation and on m − {\displaystyle {\mathfrak {m}}_{-}} by its conjugate. [ 9 ]
The realization of H / K as a generalized flag variety G / P is obtained by taking G as in the table (the complexification of H ) and P to be the parabolic subgroup equal to the semidirect product of L , the complexification of K , with the complex Abelian subgroup exp m + {\displaystyle {\mathfrak {m}}_{+}} . (In the language of algebraic groups , L is the Levi factor of P .)
Any Hermitian symmetric space of compact type is simply connected and can be written as a direct product of irreducible hermitian symmetric spaces H i / K i with H i simple, K i connected of maximal rank with center T . The irreducible ones are therefore exactly the non-semisimple cases classified by Borel–de Siebenthal theory . [ 2 ]
Accordingly, the irreducible compact Hermitian symmetric spaces H / K are classified as follows.
In terms of the classification of compact Riemannian symmetric spaces, the Hermitian symmetric spaces are the four infinite series AIII, DIII, CI and BDI with p = 2 or q = 2, and two exceptional spaces, namely EIII and EVII.
The irreducible Hermitian symmetric spaces of compact type are all simply connected. The corresponding symmetry σ of the simply connected simple compact Lie group is inner, given by conjugation by the unique element S in Z ( K ) / Z ( H ) of period 2. For the classical groups, as in the table above, these symmetries are as follows: [ 10 ]
The maximal parabolic subgroup P can be described explicitly in these classical cases. For AIII

P ( p , q ) = { ( A B 0 D ) } {\displaystyle P(p,q)=\left\{{\begin{pmatrix}A&B\\0&D\end{pmatrix}}\right\}}
in SL( p + q , C ). P ( p , q ) is the stabilizer of a subspace of dimension p in C p + q .
The other groups arise as fixed points of involutions. Let J be the n × n matrix with 1's on the antidiagonal and 0's elsewhere and set

A = ( 0 J − J 0 ) . {\displaystyle A={\begin{pmatrix}0&J\\-J&0\end{pmatrix}}.}
Then Sp( n , C ) is the fixed point subgroup of the involution θ( g ) = A ( g t ) −1 A −1 of SL(2 n , C ). SO( n , C ) can be realised as the fixed points of ψ( g ) = B ( g t ) −1 B −1 in SL( n , C ) where B = J . These involutions leave invariant P ( n , n ) in the cases DIII and CI and P ( p ,2) in the case BDI. The corresponding parabolic subgroups P are obtained by taking the fixed points. The compact group H acts transitively on G / P , so that G / P = H / K .
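The fixed-point condition θ( g ) = g is equivalent to g A g t = A , i.e. to g preserving the bilinear form with matrix A . The following minimal sketch (assuming NumPy and SciPy, with A taken to be the antisymmetric form just described) exponentiates a Lie-algebra element satisfying the linearized condition and checks that the result is fixed by θ.

```python
import numpy as np
from scipy.linalg import expm

n = 3
rng = np.random.default_rng(1)

J = np.fliplr(np.eye(n))                       # 1's on the antidiagonal
A = np.block([[np.zeros((n, n)), J], [-J, np.zeros((n, n))]])   # antisymmetric form

def theta(g):
    # The involution theta(g) = A (g^T)^{-1} A^{-1} of the text.
    return A @ np.linalg.inv(g.T) @ np.linalg.inv(A)

# Lie algebra condition X A + A X^T = 0 holds when X A is symmetric (A antisymmetric),
# so take X = S A^{-1} with S a complex symmetric matrix.
M = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
S = M + M.T
X = S @ np.linalg.inv(A)
g = expm(X)

assert np.allclose(g @ A @ g.T, A)        # g preserves the form A
assert np.allclose(theta(g), g)           # g is a fixed point of theta
assert np.allclose(theta(theta(g)), g)    # theta is an involution
print("g lies in the fixed-point group Sp(n, C) of theta")
```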
As with symmetric spaces in general, each compact Hermitian symmetric space H / K has a noncompact dual H * / K obtained by replacing H with the closed real Lie subgroup H * of the complex Lie group G with Lie algebra

h ∗ = k ⊕ i m ⊂ g . {\displaystyle {\mathfrak {h}}^{*}={\mathfrak {k}}\oplus i{\mathfrak {m}}\subset {\mathfrak {g}}.}
Whereas the natural map from H / K to G / P is an isomorphism, the natural map from H * / K to G / P is only an inclusion onto an open subset. This inclusion is called the Borel embedding after Armand Borel . In fact P ∩ H = K = P ∩ H *. The images of H and H * have the same dimension so are open. Since the image of H is compact, so closed, it follows that H / K = G / P . [ 11 ]
The polar decomposition in the complex linear group G implies the Cartan decomposition H * = K ⋅ exp i m {\displaystyle i{\mathfrak {m}}} in H *. [ 12 ]
Moreover, given a maximal Abelian subalgebra a {\displaystyle {\mathfrak {a}}} in m {\displaystyle {\mathfrak {m}}} , A = exp a {\displaystyle {\mathfrak {a}}} is a toral subgroup such that σ( a ) = a −1 on A ; and any two such a {\displaystyle {\mathfrak {a}}} 's are conjugate by an element of K . A similar statement holds for a ∗ = i a {\displaystyle {\mathfrak {a}}^{*}=i{\mathfrak {a}}} . Moreover, if A * = exp a ∗ {\displaystyle {\mathfrak {a}}^{*}} , then

H ∗ = K A ∗ K . {\displaystyle H^{*}=KA^{*}K.}
These results are special cases of the Cartan decomposition in any Riemannian symmetric space and its dual. The geodesics emanating from the origin in the homogeneous spaces can be identified with one parameter groups with generators in i m {\displaystyle i{\mathfrak {m}}} or m {\displaystyle {\mathfrak {m}}} . Similar results hold in the compact case: H = K ⋅ exp m {\displaystyle {\mathfrak {m}}} and H = KAK . [ 8 ]
The properties of the totally geodesic subspace A can be shown directly. A is closed because the closure of A is a toral subgroup satisfying σ( a ) = a −1 , so its Lie algebra lies in m {\displaystyle {\mathfrak {m}}} and hence equals a {\displaystyle {\mathfrak {a}}} by maximality. A can be generated topologically by a single element exp X , so a {\displaystyle {\mathfrak {a}}} is the centralizer of X in m {\displaystyle {\mathfrak {m}}} . In the K -orbit of any element of m {\displaystyle {\mathfrak {m}}} there is an element Y such that (X,Ad k Y) is minimized at k = 1. Setting k = exp tT with T in k {\displaystyle {\mathfrak {k}}} , it follows that ( X ,[ T , Y ]) = 0 and hence [ X , Y ] = 0, so that Y must lie in a {\displaystyle {\mathfrak {a}}} . Thus m {\displaystyle {\mathfrak {m}}} is the union of the conjugates of a {\displaystyle {\mathfrak {a}}} . In particular some conjugate of X lies in any other choice of a {\displaystyle {\mathfrak {a}}} , which centralizes that conjugate; so by maximality the only possibilities are conjugates of a {\displaystyle {\mathfrak {a}}} . [ 13 ]
The decompositions

H = K A K , H ∗ = K A ∗ K {\displaystyle H=KAK,\qquad H^{*}=KA^{*}K}
can be proved directly by applying the slice theorem for compact transformation groups to the action of K on H / K . [ 14 ] In fact the space H / K can be identified with

M = { h σ ( h ) − 1 : h ∈ H } , {\displaystyle M=\{h\sigma (h)^{-1}:h\in H\},}
a closed submanifold of H , and the Cartan decomposition follows by showing that M is the union of the kAk −1 for k in K . Since this union is the continuous image of K × A , it is compact and connected. So it suffices to show that the union is open in M and for this it is enough to show each a in A has an open neighbourhood in this union. Now by computing derivatives at 0, the union contains an open neighbourhood of 1. If a is central the union is invariant under multiplication by a , so contains an open neighbourhood of a . If a is not central, write a = b 2 with b in A . Then τ = Ad b − Ad b −1 is a skew-adjoint operator on h {\displaystyle {\mathfrak {h}}} anticommuting with the Z 2 -grading operator σ on h {\displaystyle {\mathfrak {h}}} . By an Euler–Poincaré characteristic argument it follows that the superdimension of h {\displaystyle {\mathfrak {h}}} coincides with the superdimension of the kernel of τ. In other words,

dim k − dim m = dim k a − dim m a , {\displaystyle \dim {\mathfrak {k}}-\dim {\mathfrak {m}}=\dim {\mathfrak {k}}_{a}-\dim {\mathfrak {m}}_{a},}
where k a {\displaystyle {\mathfrak {k}}_{a}} and m a {\displaystyle {\mathfrak {m}}_{a}} are the subspaces fixed by Ad a . Let the orthogonal complement of k a {\displaystyle {\mathfrak {k}}_{a}} in k {\displaystyle {\mathfrak {k}}} be k a ⊥ {\displaystyle {\mathfrak {k}}_{a}^{\perp }} . Computing derivatives, it follows that the set of elements Ad e X ( a e Y ), where X lies in k a ⊥ {\displaystyle {\mathfrak {k}}_{a}^{\perp }} and Y in m a {\displaystyle {\mathfrak {m}}_{a}} , forms an open neighbourhood of a in the union. Here the terms a e Y lie in the union by the argument for central a : indeed a is in the center of the identity component of the centralizer of a which is invariant under σ and contains A .
The dimension of a {\displaystyle {\mathfrak {a}}} is called the rank of the Hermitian symmetric space.
In the case of Hermitian symmetric spaces, Harish-Chandra gave a canonical choice for a {\displaystyle {\mathfrak {a}}} .
This choice of a {\displaystyle {\mathfrak {a}}} is determined by taking a maximal torus T of H in K with Lie algebra t {\displaystyle {\mathfrak {t}}} . Since the symmetry σ is implemented by an element of T lying in the centre of H , the root spaces g α {\displaystyle {\mathfrak {g}}_{\alpha }} in g {\displaystyle {\mathfrak {g}}} are left invariant by σ. It acts as the identity on those contained in k C {\displaystyle {\mathfrak {k}}_{\mathbb {C} }} and minus the identity on those in m C {\displaystyle {\mathfrak {m}}_{\mathbb {C} }} .
The roots with root spaces in k C {\displaystyle {\mathfrak {k}}_{\mathbb {C} }} are called compact roots and those with root spaces in m C {\displaystyle {\mathfrak {m}}_{\mathbb {C} }} are called noncompact roots . (This terminology originates from the symmetric space of noncompact type.) If H is simple, the generator Z of the centre of K can be used to define a set of positive roots, according to the sign of α( Z ). With this choice of roots m + {\displaystyle {\mathfrak {m}}_{+}} and m − {\displaystyle {\mathfrak {m}}_{-}} are the direct sum of the root spaces g α {\displaystyle {\mathfrak {g}}_{\alpha }} over positive and negative noncompact roots α. Root vectors E α can be chosen so that

E α − E − α , i ( E α + E − α ) {\displaystyle E_{\alpha }-E_{-\alpha },\qquad i(E_{\alpha }+E_{-\alpha })}
lie in h {\displaystyle {\mathfrak {h}}} . The simple roots α 1 , ..., α n are the indecomposable positive roots. These can be numbered so that α i vanishes on the centre of k {\displaystyle {\mathfrak {k}}} for i > 1, whereas α 1 does not. Thus α 1 is the unique noncompact simple root and the other simple roots are compact. Any positive noncompact root then has the form β = α 1 + c 2 α 2 + ⋅⋅⋅ + c n α n with non-negative coefficients c i . These coefficients lead to a lexicographic order on positive roots. The coefficient of α 1 is always one because m − {\displaystyle {\mathfrak {m}}_{-}} is irreducible for K so is spanned by vectors obtained by successively applying the lowering operators E –α for simple compact roots α.
Two roots α and β are said to be strongly orthogonal if ±α ±β are not roots or zero, written α ≐ β. The highest positive root ψ 1 is noncompact. Take ψ 2 to be the highest noncompact positive root strongly orthogonal to ψ 1 (for the lexicographic order). Then continue in this way taking ψ i + 1 to be the highest noncompact positive root strongly orthogonal to ψ 1 , ..., ψ i until the process terminates. The corresponding vectors
lie in m {\displaystyle {\mathfrak {m}}} and commute by strong orthogonality. Their span a {\displaystyle {\mathfrak {a}}} is Harish-Chandra's canonical maximal Abelian subalgebra. [ 15 ] (As Sugiura later showed, having fixed T , the set of strongly orthogonal roots is uniquely determined up to applying an element in the Weyl group of K . [ 16 ] )
Maximality can be checked by showing that if
for all i , then c α = 0 for all positive noncompact roots α different from the ψ j 's. This follows by showing inductively that if c α ≠ 0, then α is strongly orthogonal to ψ 1 , ψ 2 , ... a contradiction. Indeed, the above relation shows ψ i + α cannot be a root; and that if ψ i – α is a root, then it would necessarily have the form β – ψ i . If ψ i – α were negative, then α would be a higher positive root than ψ i , strongly orthogonal to the ψ j with j < i , which is not possible; similarly if β – ψ i were positive.
Harish-Chandra's canonical choice of a {\displaystyle {\mathfrak {a}}} leads to a polydisk and polysphere theorem in H */ K and H / K . This result reduces the geometry to products of the prototypic example involving SL(2, C ), SU(1,1) and SU(2), namely the unit disk inside the Riemann sphere.
In the case of H = SU(2) the symmetry σ is given by conjugation by the diagonal matrix with entries ± i so that

σ ( α β γ δ ) = ( α − β − γ δ ) . {\displaystyle \sigma {\begin{pmatrix}\alpha &\beta \\\gamma &\delta \end{pmatrix}}={\begin{pmatrix}\alpha &-\beta \\-\gamma &\delta \end{pmatrix}}.}
The fixed point subgroup is the maximal torus T , the diagonal matrices with entries e ± it . SU(2) acts on the Riemann sphere C P 1 {\displaystyle \mathbf {CP} ^{1}} transitively by Möbius transformations and T is the stabilizer of 0. SL(2, C ), the complexification of SU(2), also acts by Möbius transformations and the stabiliser of 0 is the subgroup B of lower triangular matrices. The noncompact subgroup SU(1,1) acts with precisely three orbits: the open unit disk | z | < 1; the unit circle | z | = 1; and its exterior | z | > 1. Thus
where B + and T C denote the subgroups of upper triangular and diagonal matrices in SL(2, C ). The middle term is the orbit of 0 under the upper unitriangular matrices
Now for each root ψ i there is a homomorphism π i of SU(2) into H which is compatible with the symmetries. It extends uniquely to a homomorphism of SL(2, C ) into G . The images of the Lie algebras for different ψ i 's commute since they are strongly orthogonal. Thus there is a homomorphism π of the direct product SU(2) r into H compatible with the symmetries. It extends to a homomorphism of SL(2, C ) r into G . The kernel of π is contained in the center (±1) r of SU(2) r which is fixed pointwise by the symmetry. So the image of the center under π lies in K . Thus there is an embedding of the polysphere (SU(2)/T) r into H / K = G / P and the polysphere contains the polydisk (SU(1,1)/T) r . The polysphere and polydisk are the direct product of r copies of the Riemann sphere and the unit disk. By the Cartan decompositions in SU(2) and SU(1,1),
the polysphere is the orbit of T r A in H / K and the polydisk is the orbit of T r A *, where T r = π( T r ) ⊆ K . On the other hand, H = KAK and H * = K A * K .
Hence every element in the compact Hermitian symmetric space H / K is in the K -orbit of a point in the polysphere; and every element in the image under the Borel embedding of the noncompact Hermitian symmetric space H * / K is in the K -orbit of a point in the polydisk. [ 17 ]
H * / K , the Hermitian symmetric space of noncompact type, lies in the image of exp m + {\displaystyle \exp {\mathfrak {m}}_{+}} , a dense open subset of H / K biholomorphic to m + {\displaystyle {\mathfrak {m}}_{+}} . The corresponding domain in m + {\displaystyle {\mathfrak {m}}_{+}} is bounded. This is the Harish-Chandra embedding named after Harish-Chandra .
In fact Harish-Chandra showed the following properties of the space X = exp ( m + ) ⋅ K C ⋅ exp ( m − ) = exp ( m + ) ⋅ P {\displaystyle \mathbf {X} =\exp({\mathfrak {m}}_{+})\cdot K_{\mathbb {C} }\cdot \exp({\mathfrak {m}}_{-})=\exp({\mathfrak {m}}_{+})\cdot P} :
In fact M ± = exp m ± {\displaystyle M_{\pm }=\exp {\mathfrak {m}}_{\pm }} are complex Abelian groups normalised by K C . Moreover, [ m + , m − ] ⊂ k C {\displaystyle [{\mathfrak {m}}_{+},{\mathfrak {m}}_{-}]\subset {\mathfrak {k}}_{\mathfrak {C}}} since [ m , m ] ⊂ k {\displaystyle [{\mathfrak {m}},{\mathfrak {m}}]\subset {\mathfrak {k}}} .
This implies P ∩ M + = {1}. For if x = e X with X in m + {\displaystyle {\mathfrak {m}}_{+}} lies in P , it must normalize M − and hence m − {\displaystyle {\mathfrak {m}}_{-}} . But if Y lies in m − {\displaystyle {\mathfrak {m}}_{-}} , then

Ad ( x ) Y = e ad X Y = Y + [ X , Y ] + 1 2 [ X , [ X , Y ] ] , {\displaystyle \mathrm {Ad} (x)Y=e^{\operatorname {ad} X}Y=Y+[X,Y]+{\tfrac {1}{2}}[X,[X,Y]],}

with [ X , Y ] in l {\displaystyle {\mathfrak {l}}} and [ X , [ X , Y ]] in m + {\displaystyle {\mathfrak {m}}_{+}} ,
so that X commutes with m − {\displaystyle {\mathfrak {m}}_{-}} . But if X commutes with every noncompact root space, it must be 0, so x = 1. It follows that the multiplication map μ on M + × P is injective so (1) follows. Similarly the derivative of μ at ( x , p ) is
which is injective, so (2) follows. For the special case H = SU(2), H * = SU(1,1) and G = SL(2, C ) the remaining assertions are consequences of the identification with the Riemann sphere, C and unit disk. They can be applied to the groups defined for each root ψ i . By the polysphere and polydisk theorem H */ K , X / P and H / K are the union of the K -translates of the polydisk, C r and the polysphere. So H * lies in X , the closure of H */ K is compact in X / P , which is in turn dense in H / K .
Note that (2) and (3) are also consequences of the fact that the image of X in G / P is that of the big cell B + B in the Gauss decomposition of G . [ 18 ]
Using results on the restricted root system of the symmetric spaces H / K and H */ K , Hermann showed that the image of H */ K in m + {\displaystyle {\mathfrak {m}}_{+}} is a generalized unit disk. In fact it is the convex set of X for which the operator norm of ad Im X is less than one. [ 19 ]
A bounded domain Ω in a complex vector space is said to be a bounded symmetric domain if for every x in Ω , there is an involutive biholomorphism σ x of Ω for which x is an isolated fixed point. The Harish-Chandra embedding exhibits every Hermitian symmetric space of noncompact type H * / K as a bounded symmetric domain. The biholomorphism group of H * / K is equal to its isometry group H * .
Conversely every bounded symmetric domain arises in this way. Indeed, given a bounded symmetric domain Ω , the Bergman kernel defines a metric on Ω , the Bergman metric , for which every biholomorphism is an isometry. This realizes Ω as a Hermitian symmetric space of noncompact type. [ 20 ]
The irreducible bounded symmetric domains are called Cartan domains and are classified as follows.
In the classical cases (I–IV), the noncompact group can be realized by 2 × 2 block matrices [ 21 ]

g = ( A B C D ) {\displaystyle g={\begin{pmatrix}A&B\\C&D\end{pmatrix}}}

acting by generalized Möbius transformations

g ( Z ) = ( A Z + B ) ( C Z + D ) − 1 . {\displaystyle g(Z)=(AZ+B)(CZ+D)^{-1}.}
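A minimal numerical sketch (assuming NumPy and SciPy, and using the type I domain of p × q complex matrices of operator norm less than one as a representative case) illustrates this action: a block matrix preserving the form diag(I p , −I q ) sends Z to (AZ + B)(CZ + D) −1 and maps the domain to itself.

```python
import numpy as np
from scipy.linalg import expm

p, q = 2, 3
rng = np.random.default_rng(2)
H = np.block([[np.eye(p), np.zeros((p, q))],
              [np.zeros((q, p)), -np.eye(q)]])

# An element of U(p, q): g = expm(H K) with K skew-Hermitian, so that g* H g = H.
M = rng.standard_normal((p + q, p + q)) + 1j * rng.standard_normal((p + q, p + q))
K = M - M.conj().T
g = expm(H @ K)
assert np.allclose(g.conj().T @ H @ g, H)

A, B = g[:p, :p], g[:p, p:]
C, D = g[p:, :p], g[p:, p:]

# A point of the type I domain: a p x q matrix of operator norm < 1.
Z = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
Z = 0.9 * Z / np.linalg.norm(Z, 2)

W = (A @ Z + B) @ np.linalg.inv(C @ Z + D)   # generalized Möbius action
assert np.linalg.norm(W, 2) < 1              # the bounded domain is preserved
print("operator norms:", np.linalg.norm(Z, 2), "->", np.linalg.norm(W, 2))
```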
The polydisk theorem takes the following concrete form in the classical cases: [ 22 ]
The noncompact group H * acts on the complex Hermitian symmetric space H / K = G / P with only finitely many orbits. The orbit structure is described in detail in Wolf (1972) . In particular the closure of the bounded domain H */ K has a unique closed orbit, which is the Shilov boundary of the domain. In general the orbits are unions of Hermitian symmetric spaces of lower dimension. The complex function theory of the domains, in particular the analogue of the Cauchy integral formulas , are described for the Cartan domains in Hua (1979) . The closure of the bounded domain is the Baily–Borel compactification of H */ K . [ 23 ]
The boundary structure can be described using Cayley transforms . For each copy of SU(2) defined by one of the noncompact roots ψ i , there is a Cayley transform c i which as a Möbius transformation maps the unit disk onto the upper half plane. Given a subset I of indices of the strongly orthogonal family ψ 1 , ..., ψ r , the partial Cayley transform c I is defined as the product of the c i 's with i in I in the product of the groups π i . Let G ( I ) be the centralizer of this product in G and H *( I ) = H * ∩ G ( I ). Since σ leaves H *( I ) invariant, there is a corresponding Hermitian symmetric space M I = H *( I )/( H *( I ) ∩ K ) ⊂ H */ K = M . The boundary component for the subset I is the union of the K -translates of c I M I . When I is the set of all indices, M I is a single point and the boundary component is the Shilov boundary. Moreover, M I is in the closure of M J if and only if I ⊇ J . [ 24 ]
Every Hermitian symmetric space is a Kähler manifold . They can be defined equivalently as Riemannian symmetric spaces with a parallel complex structure with respect to which the Riemannian metric is Hermitian . The complex structure is automatically preserved by the isometry group H of the metric, and so any Hermitian symmetric space M is a homogeneous complex manifold. Some examples are complex vector spaces and complex projective spaces , with their usual Hermitian metrics and Fubini–Study metrics , and the complex unit balls with suitable metrics so that they become complete and Riemannian symmetric. The compact Hermitian symmetric spaces are projective varieties , and admit a strictly larger Lie group G of biholomorphisms with respect to which they are homogeneous: in fact, they are generalized flag manifolds , i.e., G is semisimple and the stabilizer of a point is a parabolic subgroup P of G . Among (complex) generalized flag manifolds G / P , they are characterized as those for which the nilradical of the Lie algebra of P is abelian. Thus they are contained within the family of symmetric R-spaces which conversely comprises Hermitian symmetric spaces and their real forms. The non-compact Hermitian symmetric spaces can be realized as bounded domains in complex vector spaces.
Although the classical Hermitian symmetric spaces can be constructed by ad hoc methods, Jordan triple systems , or equivalently Jordan pairs, provide a uniform algebraic means of describing all the basic properties connected with a Hermitian symmetric space of compact type and its non-compact dual. This theory is described in detail in Koecher (1969) and Loos (1977) and summarized in Satake (1981) . The development is in the reverse order from that using the structure theory of compact Lie groups. Its starting point is the Hermitian symmetric space of noncompact type realized as a bounded symmetric domain. It can be described in terms of a Jordan pair or hermitian Jordan triple system . This Jordan algebra structure can be used to reconstruct the dual Hermitian symmetric space of compact type, including in particular all the associated Lie algebras and Lie groups.
The theory is easiest to describe when the irreducible compact Hermitian symmetric space is of tube type. In that case the space is determined by a simple real Lie algebra g {\displaystyle {\mathfrak {g}}} with negative definite Killing form. It must admit an action of SU(2) which acts only via the trivial and adjoint representations, with both types occurring. Since g {\displaystyle {\mathfrak {g}}} is simple, this action is inner, so implemented by an inclusion of the Lie algebra of SU(2) in g {\displaystyle {\mathfrak {g}}} . The complexification of g {\displaystyle {\mathfrak {g}}} decomposes as a direct sum of three eigenspaces for the diagonal matrices in SU(2). It is a three-graded complex Lie algebra, with the Weyl group element of SU(2) providing the involution. Each of the ±1 eigenspaces has the structure of a unital complex Jordan algebra explicitly arising as the complexification of a Euclidean Jordan algebra. It can be identified with the multiplicity space of the adjoint representation of SU(2) in g {\displaystyle {\mathfrak {g}}} .
The description of irreducible Hermitian symmetric spaces of tube type starts from a simple Euclidean Jordan algebra E . It admits Jordan frames , i.e. sets of orthogonal minimal idempotents e 1 , ..., e m . Any two are related by an automorphism of E , so that the integer m is an invariant called the rank of E . Moreover, if A is the complexification of E , it has a unitary structure group . It is a subgroup of GL( A ) preserving the natural complex inner product on A . Any element a in A has a polar decomposition a = u Σ α i e i with α i ≥ 0 . The spectral norm is defined by ||a|| = sup α i . The associated bounded symmetric domain is just the open unit ball D in A . There is a biholomorphism between D and the tube domain T = E + iC where C is the open self-dual convex cone of elements in E of the form a = u Σ α i e i with u an automorphism of E and α i > 0. This gives two descriptions of the Hermitian symmetric space of noncompact type.

There is a natural way of using mutations of the Jordan algebra A to compactify the space A . The compactification X is a complex manifold and the finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} of holomorphic vector fields on X can be determined explicitly. One parameter groups of biholomorphisms can be defined such that the corresponding holomorphic vector fields span g {\displaystyle {\mathfrak {g}}} . This includes the group of all complex Möbius transformations corresponding to matrices in SL(2, C ). The subgroup SU(1,1) leaves invariant the unit ball and its closure. The subgroup SL(2, R ) leaves invariant the tube domain and its closure. The usual Cayley transform and its inverse, mapping the unit disk in C to the upper half plane, establish analogous maps between D and T . The polydisk corresponds to the real and complex Jordan subalgebras generated by a fixed Jordan frame. It admits a transitive action of SU(2) m and this action extends to X .

The group G generated by the one-parameter groups of biholomorphisms acts faithfully on g {\displaystyle {\mathfrak {g}}} . The subgroup generated by the identity component K of the unitary structure group and the operators in SU(2) m defines a compact Lie group H which acts transitively on X . Thus H / K is the corresponding Hermitian symmetric space of compact type. The group G can be identified with the complexification of H . The subgroup H * leaving D invariant is a noncompact real form of G . It acts transitively on D so that H * / K is the dual Hermitian symmetric space of noncompact type. The inclusions D ⊂ A ⊂ X reproduce the Borel and Harish-Chandra embeddings. The classification of Hermitian symmetric spaces of tube type reduces to that of simple Euclidean Jordan algebras. These were classified by Jordan, von Neumann & Wigner (1934) in terms of Euclidean Hurwitz algebras , a special type of composition algebra .
In general a Hermitian symmetric space gives rise to a 3-graded Lie algebra with a period 2 conjugate linear automorphism switching the parts of degree ±1 and preserving the degree 0 part. This gives rise to the structure of a Jordan pair or hermitian Jordan triple system , to which Loos (1977) extended the theory of Jordan algebras. All irreducible Hermitian symmetric spaces can be constructed uniformly within this framework. Koecher (1969) constructed the irreducible Hermitian symmetric space of non-tube type from a simple Euclidean Jordan algebra together with a period 2 automorphism. The −1 eigenspace of the automorphism has the structure of a Jordan pair, which can be deduced from that of the larger Jordan algebra. In the non-tube type case corresponding to a Siegel domain of type II, there is no distinguished subgroup of real or complex Möbius transformations. For irreducible Hermitian symmetric spaces, tube type is characterized by the real dimension of the Shilov boundary S being equal to the complex dimension of D .

Source: https://en.wikipedia.org/wiki/Hermitian_symmetric_space
The heroic theory of invention and scientific development is the view that the principal authors of inventions and scientific discoveries are unique heroic individuals—i.e., "great scientists" or "geniuses". [ 1 ]
A competing hypothesis (that of multiple discovery ) is that most inventions and scientific discoveries are made independently and simultaneously by multiple inventors and scientists.
The multiple-discovery hypothesis may be most patently exemplified in the evolution of mathematics , since mathematical knowledge is highly unified and any advances need, as a general rule, to be built from previously established results through a process of deduction. Thus, the development of infinitesimal calculus into a systematic discipline did not occur until the development of analytic geometry , the former being credited to both Sir Isaac Newton and Gottfried Leibniz , and the latter to both René Descartes and Pierre de Fermat .
Source: https://en.wikipedia.org/wiki/Heroic_theory_of_invention_and_scientific_development
In geometry , Heron's formula (or Hero's formula ) gives the area of a triangle in terms of the three side lengths a , {\displaystyle a,} b , {\displaystyle b,} c . {\displaystyle c.} Letting s {\displaystyle s} be the semiperimeter of the triangle, s = 1 2 ( a + b + c ) , {\displaystyle s={\tfrac {1}{2}}(a+b+c),} the area A {\displaystyle A} is [ 1 ]
A = s ( s − a ) ( s − b ) ( s − c ) . {\displaystyle A={\sqrt {s(s-a)(s-b)(s-c)}}.}
It is named after first-century engineer Heron of Alexandria (or Hero) who proved it in his work Metrica , though it was probably known centuries earlier.
Let △ A B C {\displaystyle \triangle ABC} be the triangle with sides a = 4 {\displaystyle a=4} , b = 13 {\displaystyle b=13} , and c = 15 {\displaystyle c=15} .
This triangle's semiperimeter is s = 1 2 ( a + b + c ) = {\displaystyle s={\tfrac {1}{2}}(a+b+c)={}} 1 2 ( 4 + 13 + 15 ) = 16 {\displaystyle {\tfrac {1}{2}}(4+13+15)=16} therefore s − a = 12 {\displaystyle s-a=12} , s − b = 3 {\displaystyle s-b=3} , s − c = 1 {\displaystyle s-c=1} , and the area is A = s ( s − a ) ( s − b ) ( s − c ) = 16 ⋅ 12 ⋅ 3 ⋅ 1 ) = 24. {\displaystyle {\begin{aligned}A&={\textstyle {\sqrt {s(s-a)(s-b)(s-c)}}}\\[3mu]&={\textstyle {\sqrt {16\cdot 12\cdot 3\cdot 1{\vphantom {)}}}}}\\[3mu]&=24.\end{aligned}}}
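A quick computational check of this example (a minimal sketch using only the Python standard library; the helper name is ad hoc):

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its side lengths via Heron's formula."""
    s = (a + b + c) / 2          # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(4, 13, 15))     # 24.0, matching the worked example
print(heron_area(3, 4, 5))       # 6.0, the familiar right triangle
```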
In this example, the triangle's side lengths and area are integers , making it a Heronian triangle . However, Heron's formula works equally well when the side lengths are real numbers . As long as they obey the strict triangle inequality , they define a triangle in the Euclidean plane whose area is a positive real number.
Heron's formula can also be written in terms of just the side lengths instead of using the semiperimeter, in several ways,
A = 1 4 ( a + b + c ) ( − a + b + c ) ( a − b + c ) ( a + b − c ) = 1 4 2 ( a 2 b 2 + a 2 c 2 + b 2 c 2 ) − ( a 4 + b 4 + c 4 ) = 1 4 ( a 2 + b 2 + c 2 ) 2 − 2 ( a 4 + b 4 + c 4 ) = 1 4 4 ( a 2 b 2 + a 2 c 2 + b 2 c 2 ) − ( a 2 + b 2 + c 2 ) 2 = 1 4 4 a 2 b 2 − ( a 2 + b 2 − c 2 ) 2 . {\displaystyle {\begin{aligned}A&={\tfrac {1}{4}}{\sqrt {(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {2(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})-(a^{4}+b^{4}+c^{4})}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {(a^{2}+b^{2}+c^{2})^{2}-2(a^{4}+b^{4}+c^{4})}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {4(a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2})-(a^{2}+b^{2}+c^{2})^{2}}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {4a^{2}b^{2}-(a^{2}+b^{2}-c^{2})^{2}}}.\end{aligned}}}
After expansion, the expression under the square root is a quadratic polynomial of the squared side lengths a 2 {\displaystyle a^{2}} , b 2 {\displaystyle b^{2}} , c 2 {\displaystyle c^{2}} .
The same relation can be expressed using the Cayley–Menger determinant , [ 3 ]
− 16 A 2 = | 0 a 2 b 2 1 a 2 0 c 2 1 b 2 c 2 0 1 1 1 1 0 | . {\displaystyle -16A^{2}={\begin{vmatrix}0&a^{2}&b^{2}&1\\a^{2}&0&c^{2}&1\\b^{2}&c^{2}&0&1\\1&1&1&0\end{vmatrix}}.}
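The determinant form can be checked numerically. The sketch below (assuming NumPy; the function name is ad hoc) evaluates the Cayley–Menger determinant for the 4–13–15 triangle above, recovering −16 A 2 = −9216 and hence A = 24.

```python
import numpy as np

def area_cayley_menger(a, b, c):
    # -16 A^2 equals the 4x4 Cayley-Menger determinant built from squared sides.
    M = np.array([[0,   a*a, b*b, 1],
                  [a*a, 0,   c*c, 1],
                  [b*b, c*c, 0,   1],
                  [1,   1,   1,   0]], dtype=float)
    return np.sqrt(-np.linalg.det(M) / 16)

print(area_cayley_menger(4, 13, 15))   # 24.0 (up to rounding)
```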
The formula is credited to Heron (or Hero) of Alexandria ( fl. 60 AD), [ 4 ] and a proof can be found in his book Metrica . Mathematical historian Thomas Heath suggested that Archimedes knew the formula over two centuries earlier, [ 5 ] and since Metrica is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work. [ 6 ]
A formula equivalent to Heron's was discovered by Chinese mathematician Qin Jiushao:
A = 1 2 a 2 c 2 − ( a 2 + c 2 − b 2 2 ) 2 , {\displaystyle A={\frac {1}{2}}{\sqrt {a^{2}c^{2}-\left({\frac {a^{2}+c^{2}-b^{2}}{2}}\right)^{2}}},}
published in Mathematical Treatise in Nine Sections ( Qin Jiushao , 1247). [ 7 ]
There are many ways to prove Heron's formula, for example using trigonometry as below, or the incenter and one excircle of the triangle, [ 8 ] or as a special case of De Gua's theorem (for the particular case of acute triangles), [ 9 ] or as a special case of Brahmagupta's formula (for the case of a degenerate cyclic quadrilateral).
A modern proof, which uses algebra and is quite different from the one provided by Heron, follows. [ 10 ] Let a , {\displaystyle a,} b , {\displaystyle b,} c {\displaystyle c} be the sides of the triangle and α , {\displaystyle \alpha ,} β , {\displaystyle \beta ,} γ {\displaystyle \gamma } the angles opposite those sides.
Applying the law of cosines we get
cos γ = a 2 + b 2 − c 2 2 a b {\displaystyle \cos \gamma ={\frac {a^{2}+b^{2}-c^{2}}{2ab}}}
From this proof, we get the algebraic statement that
sin γ = 1 − cos 2 γ = 4 a 2 b 2 − ( a 2 + b 2 − c 2 ) 2 2 a b . {\displaystyle \sin \gamma ={\sqrt {1-\cos ^{2}\gamma }}={\frac {\sqrt {4a^{2}b^{2}-(a^{2}+b^{2}-c^{2})^{2}}}{2ab}}.}
The altitude of the triangle on base a {\displaystyle a} has length b sin γ {\displaystyle b\sin \gamma } , and it follows
A = 1 2 ( base ) ( altitude ) = 1 2 a b sin γ = a b 4 a b 4 a 2 b 2 − ( a 2 + b 2 − c 2 ) 2 = 1 4 − a 4 − b 4 − c 4 + 2 a 2 b 2 + 2 a 2 c 2 + 2 b 2 c 2 = 1 4 ( a + b + c ) ( − a + b + c ) ( a − b + c ) ( a + b − c ) = ( a + b + c 2 ) ( − a + b + c 2 ) ( a − b + c 2 ) ( a + b − c 2 ) = s ( s − a ) ( s − b ) ( s − c ) . {\displaystyle {\begin{aligned}A&={\tfrac {1}{2}}({\mbox{base}})({\mbox{altitude}})\\[6mu]&={\tfrac {1}{2}}ab\sin \gamma \\[6mu]&={\frac {ab}{4ab}}{\sqrt {4a^{2}b^{2}-(a^{2}+b^{2}-c^{2})^{2}}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {-a^{4}-b^{4}-c^{4}+2a^{2}b^{2}+2a^{2}c^{2}+2b^{2}c^{2}}}\\[6mu]&={\tfrac {1}{4}}{\sqrt {(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}\\[6mu]&={\sqrt {\left({\frac {a+b+c}{2}}\right)\left({\frac {-a+b+c}{2}}\right)\left({\frac {a-b+c}{2}}\right)\left({\frac {a+b-c}{2}}\right)}}\\[6mu]&={\sqrt {s(s-a)(s-b)(s-c)}}.\end{aligned}}}
The following proof is very similar to one given by Raifaizen. [ 11 ] By the Pythagorean theorem we have b 2 = h 2 + d 2 {\displaystyle b^{2}=h^{2}+d^{2}} and a 2 = h 2 + ( c − d ) 2 {\displaystyle a^{2}=h^{2}+(c-d)^{2}} according to the figure at the right. Subtracting these yields a 2 − b 2 = c 2 − 2 c d . {\displaystyle a^{2}-b^{2}=c^{2}-2cd.} This equation allows us to express d {\displaystyle d} in terms of the sides of the triangle: d = − a 2 + b 2 + c 2 2 c . {\displaystyle d={\frac {-a^{2}+b^{2}+c^{2}}{2c}}.} For the height of the triangle we have that h 2 = b 2 − d 2 . {\displaystyle h^{2}=b^{2}-d^{2}.} By replacing d {\displaystyle d} with the formula given above and applying the difference of squares identity we get h 2 = b 2 − ( − a 2 + b 2 + c 2 2 c ) 2 = ( 2 b c − a 2 + b 2 + c 2 ) ( 2 b c + a 2 − b 2 − c 2 ) 4 c 2 = ( ( b + c ) 2 − a 2 ) ( a 2 − ( b − c ) 2 ) 4 c 2 = ( b + c − a ) ( b + c + a ) ( a + b − c ) ( a − b + c ) 4 c 2 = 2 ( s − a ) ⋅ 2 s ⋅ 2 ( s − c ) ⋅ 2 ( s − b ) 4 c 2 = 4 s ( s − a ) ( s − b ) ( s − c ) c 2 . {\displaystyle {\begin{aligned}h^{2}&=b^{2}-\left({\frac {-a^{2}+b^{2}+c^{2}}{2c}}\right)^{2}\\&={\frac {(2bc-a^{2}+b^{2}+c^{2})(2bc+a^{2}-b^{2}-c^{2})}{4c^{2}}}\\&={\frac {{\big (}(b+c)^{2}-a^{2}{\big )}{\big (}a^{2}-(b-c)^{2}{\big )}}{4c^{2}}}\\&={\frac {(b+c-a)(b+c+a)(a+b-c)(a-b+c)}{4c^{2}}}\\&={\frac {2(s-a)\cdot 2s\cdot 2(s-c)\cdot 2(s-b)}{4c^{2}}}\\&={\frac {4s(s-a)(s-b)(s-c)}{c^{2}}}.\end{aligned}}}
We now apply this result to the formula that calculates the area of a triangle from its height: A = c h 2 = c 2 4 ⋅ 4 s ( s − a ) ( s − b ) ( s − c ) c 2 = s ( s − a ) ( s − b ) ( s − c ) . {\displaystyle {\begin{aligned}A&={\frac {ch}{2}}\\&={\sqrt {{\frac {c^{2}}{4}}\cdot {\frac {4s(s-a)(s-b)(s-c)}{c^{2}}}}}\\&={\sqrt {s(s-a)(s-b)(s-c)}}.\end{aligned}}}
If r {\displaystyle r} is the radius of the incircle of the triangle, then the triangle can be broken into three triangles of equal altitude r {\displaystyle r} and bases a , {\displaystyle a,} b , {\displaystyle b,} and c . {\displaystyle c.} Their combined area is A = 1 2 a r + 1 2 b r + 1 2 c r = r s , {\displaystyle A={\tfrac {1}{2}}ar+{\tfrac {1}{2}}br+{\tfrac {1}{2}}cr=rs,} where s = 1 2 ( a + b + c ) {\displaystyle s={\tfrac {1}{2}}(a+b+c)} is the semiperimeter.
The triangle can alternately be broken into six triangles (in congruent pairs) of altitude r {\displaystyle r} and bases s − a , {\displaystyle s-a,} s − b , {\displaystyle s-b,} and s − c {\displaystyle s-c} of combined area (see law of cotangents ) A = r ( s − a ) + r ( s − b ) + r ( s − c ) = r 2 ( s − a r + s − b r + s − c r ) = r 2 ( cot α 2 + cot β 2 + cot γ 2 ) = r 2 ( cot α 2 cot β 2 cot γ 2 ) = r 2 ( s − a r ⋅ s − b r ⋅ s − c r ) = ( s − a ) ( s − b ) ( s − c ) r . {\displaystyle {\begin{aligned}A&=r(s-a)+r(s-b)+r(s-c)\\[2mu]&=r^{2}\left({\frac {s-a}{r}}+{\frac {s-b}{r}}+{\frac {s-c}{r}}\right)\\[2mu]&=r^{2}\left(\cot {\frac {\alpha }{2}}+\cot {\frac {\beta }{2}}+\cot {\frac {\gamma }{2}}\right)\\[3mu]&=r^{2}\left(\cot {\frac {\alpha }{2}}\cot {\frac {\beta }{2}}\cot {\frac {\gamma }{2}}\right)\\[3mu]&=r^{2}\left({\frac {s-a}{r}}\cdot {\frac {s-b}{r}}\cdot {\frac {s-c}{r}}\right)\\[3mu]&={\frac {(s-a)(s-b)(s-c)}{r}}.\end{aligned}}}
The middle step above is cot α 2 + cot β 2 + cot γ 2 = {\textstyle \cot {\tfrac {\alpha }{2}}+\cot {\tfrac {\beta }{2}}+\cot {\tfrac {\gamma }{2}}={}} cot α 2 cot β 2 cot γ 2 , {\displaystyle \cot {\tfrac {\alpha }{2}}\cot {\tfrac {\beta }{2}}\cot {\tfrac {\gamma }{2}},} the triple cotangent identity , which applies because the sum of half-angles is α 2 + β 2 + γ 2 = π 2 . {\textstyle {\tfrac {\alpha }{2}}+{\tfrac {\beta }{2}}+{\tfrac {\gamma }{2}}={\tfrac {\pi }{2}}.}
Combining the two, we get A 2 = s ( s − a ) ( s − b ) ( s − c ) , {\displaystyle A^{2}=s(s-a)(s-b)(s-c),} from which the result follows.
Heron's formula as given above is numerically unstable for triangles with a very small angle when using floating-point arithmetic . A stable alternative involves arranging the lengths of the sides so that a ≥ b ≥ c {\displaystyle a\geq b\geq c} and computing [ 12 ] [ 13 ] A = 1 4 ( a + ( b + c ) ) ( c − ( a − b ) ) ( c + ( a − b ) ) ( a + ( b − c ) ) . {\displaystyle A={\tfrac {1}{4}}{\sqrt {{\big (}a+(b+c){\big )}{\big (}c-(a-b){\big )}{\big (}c+(a-b){\big )}{\big (}a+(b-c){\big )}}}.} The extra brackets indicate the order of operations required to achieve numerical stability in the evaluation.
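The difference matters in floating point. The sketch below (standard library only; an illustrative comparison, not a prescribed implementation) evaluates both arrangements on an extreme needle-like triangle and compares them with an exact rational evaluation of the same expression; the naive form loses accuracy through cancellation in s − a and s − b, while the rearranged form agrees with the exact value.

```python
import math
from fractions import Fraction

def heron_naive(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_stable(a, b, c):
    # Sort so that a >= b >= c, then use the bracketing shown above.
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c)))

def heron_exact(a, b, c):
    # Floats are exact rationals, so this evaluates s(s-a)(s-b)(s-c) without rounding.
    a, b, c = map(Fraction, (a, b, c))
    s = (a + b + c) / 2
    return math.sqrt(float(s * (s - a) * (s - b) * (s - c)))

# A needle-like triangle: two long, nearly equal sides and one tiny side.
a, b, c = 1.0, 1.0, 1.2e-15
print(heron_naive(a, b, c))    # noticeably off: cancellation in s - a and s - b
print(heron_stable(a, b, c))   # close to the exact value
print(heron_exact(a, b, c))    # reference value, about 6e-16
```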
Three other formulae for the area of a general triangle have a similar structure as Heron's formula, expressed in terms of different variables.
First, if m a , {\displaystyle m_{a},} m b , {\displaystyle m_{b},} and m c {\displaystyle m_{c}} are the medians from sides a , {\displaystyle a,} b , {\displaystyle b,} and c {\displaystyle c} respectively, and their semi-sum is σ = 1 2 ( m a + m b + m c ) , {\displaystyle \sigma ={\tfrac {1}{2}}(m_{a}+m_{b}+m_{c}),} then [ 14 ] A = 4 3 σ ( σ − m a ) ( σ − m b ) ( σ − m c ) . {\displaystyle A={\frac {4}{3}}{\sqrt {\sigma (\sigma -m_{a})(\sigma -m_{b})(\sigma -m_{c})}}.}
Next, if h a {\displaystyle h_{a}} , h b {\displaystyle h_{b}} , and h c {\displaystyle h_{c}} are the altitudes from sides a , {\displaystyle a,} b , {\displaystyle b,} and c {\displaystyle c} respectively, and semi-sum of their reciprocals is H = 1 2 ( h a − 1 + h b − 1 + h c − 1 ) , {\displaystyle H={\tfrac {1}{2}}{\bigl (}h_{a}^{-1}+h_{b}^{-1}+h_{c}^{-1}{\bigr )},} then [ 15 ] A − 1 = 4 H ( H − h a − 1 ) ( H − h b − 1 ) ( H − h c − 1 ) . {\displaystyle A^{-1}=4{\sqrt {H{\bigl (}H-h_{a}^{-1}{\bigr )}{\bigl (}H-h_{b}^{-1}{\bigr )}{\bigl (}H-h_{c}^{-1}{\bigr )}}}.}
Finally, if α , {\displaystyle \alpha ,} β , {\displaystyle \beta ,} and γ {\displaystyle \gamma } are the three angle measures of the triangle, and the semi-sum of their sines is S = 1 2 ( sin α + sin β + sin γ ) , {\displaystyle S={\tfrac {1}{2}}(\sin \alpha +\sin \beta +\sin \gamma ),} then [ 16 ] [ 17 ] A = D 2 S ( S − sin α ) ( S − sin β ) ( S − sin γ ) = 1 2 D 2 sin α sin β sin γ , {\displaystyle {\begin{aligned}A&=D^{2}{\sqrt {S(S-\sin \alpha )(S-\sin \beta )(S-\sin \gamma )}}\\[5mu]&={\tfrac {1}{2}}D^{2}\sin \alpha \,\sin \beta \,\sin \gamma ,\end{aligned}}}
where D {\displaystyle D} is the diameter of the circumcircle , D = a / sin α = b / sin β = c / sin γ . {\displaystyle D=a/{\sin \alpha }=b/{\sin \beta }=c/{\sin \gamma }.} This last formula coincides with the standard Heron formula when the circumcircle has unit diameter.
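As a numerical cross-check of the first of these variants (a minimal sketch, standard library only, using the standard median-length formula m a = ½√(2 b 2 + 2 c 2 − a 2 ), which is not stated above), the 13–14–15 triangle of area 84 gives:

```python
import math

def median_lengths(a, b, c):
    # Length of the median to each side: m_a = (1/2) sqrt(2b^2 + 2c^2 - a^2), etc.
    m_a = 0.5 * math.sqrt(2*b*b + 2*c*c - a*a)
    m_b = 0.5 * math.sqrt(2*a*a + 2*c*c - b*b)
    m_c = 0.5 * math.sqrt(2*a*a + 2*b*b - c*c)
    return m_a, m_b, m_c

def area_from_medians(m_a, m_b, m_c):
    sigma = (m_a + m_b + m_c) / 2
    return (4 / 3) * math.sqrt(sigma * (sigma - m_a) * (sigma - m_b) * (sigma - m_c))

m = median_lengths(13, 14, 15)
print(area_from_medians(*m))   # 84.0 (up to rounding)
```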
Heron's formula is a special case of Brahmagupta's formula for the area of a cyclic quadrilateral . Heron's formula and Brahmagupta's formula are both special cases of Bretschneider's formula for the area of a quadrilateral . Heron's formula can be obtained from Brahmagupta's formula or Bretschneider's formula by setting one of the sides of the quadrilateral to zero.
Brahmagupta's formula gives the area K {\displaystyle K} of a cyclic quadrilateral whose sides have lengths a , {\displaystyle a,} b , {\displaystyle b,} c , {\displaystyle c,} d {\displaystyle d} as
K = ( s − a ) ( s − b ) ( s − c ) ( s − d ) {\displaystyle K={\sqrt {(s-a)(s-b)(s-c)(s-d)}}}
where s = 1 2 ( a + b + c + d ) {\displaystyle s={\tfrac {1}{2}}(a+b+c+d)} is the semiperimeter .
Heron's formula is also a special case of the formula for the area of a trapezoid or trapezium based only on its sides. Heron's formula is obtained by setting the smaller parallel side to zero.
Expressing Heron's formula with a Cayley–Menger determinant in terms of the squares of the distances between the three given vertices, A = 1 4 − | 0 a 2 b 2 1 a 2 0 c 2 1 b 2 c 2 0 1 1 1 1 0 | {\displaystyle A={\frac {1}{4}}{\sqrt {-{\begin{vmatrix}0&a^{2}&b^{2}&1\\a^{2}&0&c^{2}&1\\b^{2}&c^{2}&0&1\\1&1&1&0\end{vmatrix}}}}} illustrates its similarity to Tartaglia's formula for the volume of a three-simplex .
Another generalization of Heron's formula to pentagons and hexagons inscribed in a circle was discovered by David P. Robbins . [ 18 ]
If one of three given lengths is equal to the sum of the other two, the three sides determine a degenerate triangle , a line segment with zero area. In this case, the semiperimeter will equal the longest side, causing Heron's formula to equal zero.
If one of three given lengths is greater than the sum of the other two, then they violate the triangle inequality and do not describe the sides of a Euclidean triangle. In this case, Heron's formula gives an imaginary result. For example if a = 3 {\displaystyle a=3} and b = c = 1 {\displaystyle b=c=1} , then A = 3 5 4 i {\displaystyle \textstyle A={\tfrac {3{\sqrt {5}}}{4}}i} . This can be interpreted using a triangle in the complex coordinate plane C 2 {\displaystyle \mathbb {C} ^{2}} , where "area" can be a complex-valued quantity, or as a triangle lying in a pseudo-Euclidean plane with one space-like dimension and one time-like dimension. [ 19 ]
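For the example just given, a direct evaluation with complex arithmetic (a small sketch, standard library only) reproduces the purely imaginary value (3√5/4) i :

```python
import cmath

a, b, c = 3, 1, 1           # violates the triangle inequality: a > b + c
s = (a + b + c) / 2
A = cmath.sqrt(s * (s - a) * (s - b) * (s - c))
print(A)                     # approximately 1.677j, i.e. (3*sqrt(5)/4) i
print(3 * 5 ** 0.5 / 4)      # 1.6771...
```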
If U , {\displaystyle U,} V , {\displaystyle V,} W , {\displaystyle W,} u , {\displaystyle u,} v , {\displaystyle v,} w {\displaystyle w} are lengths of edges of the tetrahedron (first three form a triangle; u {\displaystyle u} opposite to U {\displaystyle U} and so on), then [ 20 ] volume = ( − a + b + c + d ) ( a − b + c + d ) ( a + b − c + d ) ( a + b + c − d ) 192 u v w {\displaystyle {\text{volume}}={\frac {\sqrt {\,(-a+b+c+d)\,(a-b+c+d)\,(a+b-c+d)\,(a+b+c-d)}}{192\,u\,v\,w}}} where a = x Y Z b = y Z X c = z X Y d = x y z X = ( w − U + v ) ( U + v + w ) x = ( U − v + w ) ( v − w + U ) Y = ( u − V + w ) ( V + w + u ) y = ( V − w + u ) ( w − u + V ) Z = ( v − W + u ) ( W + u + v ) z = ( W − u + v ) ( u − v + W ) . {\displaystyle {\begin{aligned}a&={\sqrt {xYZ}}\\b&={\sqrt {yZX}}\\c&={\sqrt {zXY}}\\d&={\sqrt {xyz}}\\X&=(w-U+v)\,(U+v+w)\\x&=(U-v+w)\,(v-w+U)\\Y&=(u-V+w)\,(V+w+u)\\y&=(V-w+u)\,(w-u+V)\\Z&=(v-W+u)\,(W+u+v)\\z&=(W-u+v)\,(u-v+W).\end{aligned}}}
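This tetrahedral analogue can be sanity-checked on the regular tetrahedron with unit edges, whose volume is √2/12 ≈ 0.11785 (a minimal sketch, standard library only; variable names follow the formula above).

```python
import math

def tetra_volume(U, V, W, u, v, w):
    # Volume from six edge lengths; U, V, W form a face, u, v, w are the opposite edges.
    X = (w - U + v) * (U + v + w)
    x = (U - v + w) * (v - w + U)
    Y = (u - V + w) * (V + w + u)
    y = (V - w + u) * (w - u + V)
    Z = (v - W + u) * (W + u + v)
    z = (W - u + v) * (u - v + W)
    a = math.sqrt(x * Y * Z)
    b = math.sqrt(y * Z * X)
    c = math.sqrt(z * X * Y)
    d = math.sqrt(x * y * z)
    return math.sqrt((-a + b + c + d) * (a - b + c + d)
                     * (a + b - c + d) * (a + b + c - d)) / (192 * u * v * w)

print(tetra_volume(1, 1, 1, 1, 1, 1))   # 0.11785..., i.e. sqrt(2)/12
```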
L'Huilier's formula relates the area of a triangle in spherical geometry to its side lengths. For a spherical triangle with side lengths a , {\displaystyle a,} b , {\displaystyle b,} and c {\displaystyle c} , semiperimeter s = 1 2 ( a + b + c ) {\displaystyle s={\tfrac {1}{2}}(a+b+c)} , and area S {\displaystyle S} , [ 21 ] tan 2 S 4 = tan s 2 tan s − a 2 tan s − b 2 tan s − c 2 {\displaystyle \tan ^{2}{\frac {S}{4}}=\tan {\frac {s}{2}}\tan {\frac {s-a}{2}}\tan {\frac {s-b}{2}}\tan {\frac {s-c}{2}}}
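For instance (a short check, standard library only), the octant triangle on the unit sphere, with three sides of length π/2 and three right angles, has area S = π/2, and L'Huilier's formula reproduces this:

```python
import math

def spherical_excess(a, b, c):
    # L'Huilier's formula: area S of a spherical triangle on the unit sphere.
    s = (a + b + c) / 2
    t = (math.tan(s / 2) * math.tan((s - a) / 2)
         * math.tan((s - b) / 2) * math.tan((s - c) / 2))
    return 4 * math.atan(math.sqrt(t))

print(spherical_excess(math.pi / 2, math.pi / 2, math.pi / 2))   # 1.5707..., i.e. pi/2
print(math.pi / 2)
```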
For a triangle in hyperbolic geometry the analogous formula is tan 2 S 4 = tanh s 2 tanh s − a 2 tanh s − b 2 tanh s − c 2 . {\displaystyle \tan ^{2}{\frac {S}{4}}=\tanh {\frac {s}{2}}\tanh {\frac {s-a}{2}}\tanh {\frac {s-b}{2}}\tanh {\frac {s-c}{2}}.}

Source: https://en.wikipedia.org/wiki/Heron's_formula
Heron's fountain is a hydraulic machine invented by the 1st century AD inventor, mathematician, and physicist Heron (or Hero) of Alexandria . [ 1 ]
Heron studied the pressure of air and steam, described the first steam engine , and built toys that would spurt water, one of them known as Heron's fountain. Various versions of Heron's fountain are used today in physics classes as a demonstration of principles of hydraulics and pneumatics .
In the following description, call the 3 containers:

- A – the basin: an open container at the top, into which water is poured;
- B – the fountain supply container: a sealed container placed below the basin, initially filled with water;
- C – the air supply container: a sealed container placed at the bottom, initially filled with air.
And three pipes:

- Pipe 1, running from an opening in the bottom of the basin (A) down into the air supply container (C);
- Pipe 2, running from the air space at the top of the air supply container (C) up into the air space at the top of the fountain supply container (B);
- Pipe 3, the water-bearing tube, running from near the bottom of the fountain supply container (B) up through the basin and ending in a spout above the basin's water level.
Container A can be closed and airtight, but it is not necessary. B and C, however, must be airtight and resistant to atmospheric pressure. Plastic bottles suffice, but glass containers work better. Balloons do not work because they cannot hold pressure without deforming. The fountain works in the following way:

- Water poured into the basin (A) drains through pipe 1 into the air supply container (C), compressing the air trapped there.
- The compressed air is pushed through pipe 2 into the fountain supply container (B), raising the pressure above the water in B.
- The pressurized water in B is forced up pipe 3 and out of the spout above the basin; it falls back into the basin and drains in turn into the air supply container, keeping the fountain playing until B is empty.
These principles explain the construction:
Heron's fountain is not a perpetual motion machine. [ 2 ] If the nozzle of the spout is narrow, it may play for several minutes, but it eventually comes to a stop. The water coming out of the tube may go higher than the level in any container, but the net flow of water is downward. If, however, the volumes of the air supply and fountain supply containers are designed to be much larger than the volume of the basin, with the flow rate of water from the nozzle of the spout being held constant, the fountain could operate for a far greater time interval.
Its action may seem less paradoxical if considered as a siphon , but with the upper arch of the tube removed, and the air pressure between the two lower containers providing the positive pressure to lift the water over the arch. The device is also known as Heron's siphon.
The gravitational potential energy of the water which falls a long way from the basin into the lower container is transferred by pneumatic pressure tube (only air is moved upwards at this stage) to push the water from the upper container a short way above the basin.
The fountain can spout (almost) as high above the upper container as the water falls from the basin into the lower container. For maximum effect, place the upper container as closely beneath the basin as possible and place the lower container a long way beneath both.
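A rough pressure balance makes this quantitative. The sketch below is illustrative only, with made-up heights and all losses neglected: the trapped air's gauge pressure equals ρg times the drop from the basin to the lower container, and that pressure lifts the water in the supply container by the same height above its own surface.

```python
# Idealized pressure balance for Heron's fountain (illustrative heights in metres).
rho, g = 1000.0, 9.81          # density of water, gravitational acceleration
z_basin = 1.0                   # water level in the basin
z_supply = 0.8                  # water level in the fountain supply container
z_lower = 0.0                   # water level in the lower (air supply) container

# Gauge pressure of the trapped air, set by the water column from basin to lower container.
p_air = rho * g * (z_basin - z_lower)

# Height to which that pressure can push water above the supply-container level.
jet_top = z_supply + p_air / (rho * g)
print("jet rises to", jet_top, "m, i.e.", jet_top - z_basin, "m above the basin")
```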
As soon as the water level in the upper container has dropped so low that the water bearing tube no longer touches the water surface, the fountain stops. In order to make the fountain play again, the air supply container is emptied of water, and the fountain supply container and the basin are refilled. Lifting the water provides the energy required.
As previously mentioned, the fountain stops working when water from B has dropped to C. There are ways, however, to make it work again, such as:
There also exist fountains with two liquids of different colors and density, such as the Halite fountain. [ 3 ]
An example of Heron's fountain, built by Larry Fleinhardt , was featured in the 8th episode (titled "Tabu") of the 4th season of the television show Numb3rs .
Heron's fountain was featured in the first episode of How Britain Worked hosted by Guy Martin .

Source: https://en.wikipedia.org/wiki/Heron's_fountain
In geometry , a Heronian triangle (or Heron triangle ) is a triangle whose side lengths a , b , and c and area A are all positive integers . [ 1 ] [ 2 ] Heronian triangles are named after Heron of Alexandria , based on their relation to Heron's formula which Heron demonstrated with the example triangle of sides 13, 14, 15 and area 84 . [ 3 ]
Heron's formula implies that the Heronian triangles are exactly the positive integer solutions of the Diophantine equation

16 A 2 = ( a + b + c ) ( a + b − c ) ( b + c − a ) ( c + a − b ) ; {\displaystyle 16A^{2}=(a+b+c)(a+b-c)(b+c-a)(c+a-b);}
that is, the side lengths and area of any Heronian triangle satisfy the equation, and any positive integer solution of the equation describes a Heronian triangle. [ 4 ]
If the three side lengths are setwise coprime (meaning that the greatest common divisor of all three sides is 1), the Heronian triangle is called primitive .
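The Diophantine characterization lends itself to a brute-force search. One straightforward way (a minimal sketch, standard library only; the limits and helper names are ad hoc) is to test whether 16 A 2 is a perfect square of the form (4 A ) 2 :

```python
from math import gcd, isqrt

def heronian_area(a, b, c):
    """Return the integer area if (a, b, c) is a Heronian triangle, else None."""
    q = (a + b + c) * (a + b - c) * (a - b + c) * (-a + b + c)   # equals 16 A^2
    if q <= 0:
        return None                      # degenerate or not a triangle
    r = isqrt(q)
    if r * r != q or r % 4 != 0:
        return None                      # area is not a positive integer
    return r // 4

LIMIT = 45
primitives = []
for a in range(1, LIMIT):
    for b in range(a, LIMIT):
        for c in range(b, LIMIT):
            if a + b + c <= LIMIT and gcd(gcd(a, b), c) == 1:
                A = heronian_area(a, b, c)
                if A is not None:
                    primitives.append((a, b, c, A))

# Includes (3, 4, 5, 6), (5, 5, 6, 12), (5, 5, 8, 12), (4, 13, 15, 24), (13, 14, 15, 84), ...
print(primitives)
```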
Triangles whose side lengths and areas are all rational numbers (positive rational solutions of the above equation) are sometimes also called Heronian triangles or rational triangles ; [ 5 ] in this article, these more general triangles will be called rational Heronian triangles . Every (integral) Heronian triangle is a rational Heronian triangle. Conversely, every rational Heronian triangle is geometrically similar to exactly one primitive Heronian triangle.
In any rational Heronian triangle, the three altitudes , the circumradius , the inradius and exradii , and the sines and cosines of the three angles are also all rational numbers.
Scaling a triangle with a factor of s consists of multiplying its side lengths by s ; this multiplies the area by s 2 {\displaystyle s^{2}} and produces a similar triangle. Scaling a rational Heronian triangle by a rational factor produces another rational Heronian triangle.
Given a rational Heronian triangle of side lengths p d , q d , r d , {\textstyle {\frac {p}{d}},{\frac {q}{d}},{\frac {r}{d}},} the scale factor d gcd ( p , q , r ) {\textstyle {\frac {d}{\gcd(p,q,r)}}} produces a rational Heronian triangle such that its side lengths a , b , c {\textstyle a,b,c} are setwise coprime integers . It is proved below that the area A is an integer, and thus the triangle is a Heronian triangle. Such a triangle is often called a primitive Heronian triangle.
In summary, every similarity class of rational Heronian triangles contains exactly one primitive Heronian triangle. A byproduct of the proof is that exactly one of the side lengths of a primitive Heronian triangle is an even integer.
Proof: One has to prove that, if the side lengths a , b , c {\textstyle a,b,c} of a rational Heronian triangle are coprime integers, then the area A is also an integer and exactly one of the side lengths is even.
The Diophantine equation given in the introduction shows immediately that 16 A 2 {\displaystyle 16A^{2}} is an integer. Its square root 4 A {\displaystyle 4A} is also an integer, since the square root of an integer is either an integer or an irrational number .
If exactly one of the side lengths is even, all the factors in the right-hand side of the equation are even, and, by dividing the equation by 16 , one gets that A 2 {\displaystyle A^{2}} and A {\displaystyle A} are integers.
As the side lengths are supposed to be coprime, one is left with the case where one or three side lengths are odd. Supposing that c is odd, the right-hand side of the Diophantine equation can be rewritten
with a + b {\displaystyle a+b} and a − b {\displaystyle a-b} even. As the square of an odd integer is congruent to 1 {\displaystyle 1} modulo 4 , the right-hand side of the equation must be congruent to − 1 {\displaystyle -1} modulo 4 . The Diophantine equation thus has no solution in this case, since 16 A 2 {\displaystyle 16A^{2}} must be the square of an integer, and the square of an integer is congruent to 0 or 1 modulo 4 .
Any Pythagorean triangle is a Heronian triangle. The side lengths of such a triangle are integers , by definition. In any such triangle, one of the two shorter sides has even length, so the area (the product of these two sides, divided by two) is also an integer.
Examples of Heronian triangles that are not right-angled are the isosceles triangles obtained by joining a Pythagorean triangle and its mirror image along one of the sides adjacent to the right angle. Starting with the Pythagorean triple 3, 4, 5, this gives two Heronian triangles with side lengths (5, 5, 6) and (5, 5, 8), each of area 12 .
More generally, given two Pythagorean triples ( a , b , c ) {\displaystyle (a,b,c)} and ( a , d , e ) {\displaystyle (a,d,e)} with largest entries c and e , one can join the corresponding triangles along the sides of length a (see the figure) for getting a Heronian triangle with side lengths c , e , b + d {\displaystyle c,e,b+d} and area 1 2 a ( b + d ) {\textstyle {\tfrac {1}{2}}a(b+d)} (this is an integer, since the area of a Pythagorean triangle is an integer).
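A minimal sketch of this joining construction (the helper name below is illustrative, not from the article): two Pythagorean triples sharing the leg a are glued along that leg, giving sides c, e, b + d and area a(b + d)/2.

```python
# Join two Pythagorean triples (a, b, c) and (a, d, e) along their common leg a.
# Returns the side lengths and the (integer) area of the resulting Heronian triangle.
def join_pythagorean(a, b, c, d, e):
    assert a * a + b * b == c * c and a * a + d * d == e * e, "inputs must be Pythagorean triples"
    sides = (c, e, b + d)
    area = a * (b + d) // 2          # a*(b+d) is always even for Pythagorean legs
    return sides, area

print(join_pythagorean(4, 3, 5, 3, 5))      # ((5, 5, 6), 12)
print(join_pythagorean(3, 4, 5, 4, 5))      # ((5, 5, 8), 12)
print(join_pythagorean(12, 5, 13, 9, 15))   # ((13, 15, 14), 84) -- Heron's example
```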
There are Heronian triangles that cannot be obtained by joining Pythagorean triangles. For example, the Heronian triangle of side lengths 5 , 29 , 30 {\displaystyle 5,29,30} and area 72 cannot be obtained in this way, since none of its altitudes is an integer. Such Heronian triangles are known as indecomposable . [ 6 ] However, every Heronian triangle can be constructed from right triangles with rational side lengths, and is thus similar to a decomposable Heronian triangle. In fact, at least one of the altitudes of a triangle is inside the triangle, and divides it into two right triangles. These triangles have rational sides, since the cosine and the sine of the angles of a Heronian triangle are rational numbers, and, with the notation of the figure, one has a = c sin α {\displaystyle a=c\sin \alpha } and b = c cos α , {\displaystyle b=c\cos \alpha ,} where α {\displaystyle \alpha } is the left-most angle of the triangle.
Many quantities related to a Heronian triangle are rational numbers. In particular:
Here are some properties of side lengths of Heronian triangles, whose side lengths are a , b , c and area is A .
A parametric equation or parametrization of Heronian triangles consists of an expression of the side lengths and area of a triangle as functions—typically polynomial functions —of some parameters, such that the triangle is Heronian if and only if the parameters satisfy some constraints—typically, to be positive integers satisfying some inequalities. It is also generally required that all Heronian triangles can be obtained up to a scaling for some values of the parameters, and that these values are unique, if an order on the sides of the triangle is specified.
The first such parametrization was discovered by Brahmagupta (598-668 A.D.), who did not prove that all Heronian triangles can be generated by the parametrization. In the 18th century, Leonhard Euler provided another parametrization and proved that it generates all Heronian triangles. These parametrizations are described in the next two subsections.
In the third subsection, a rational parametrization—that is a parametrization where the parameters are positive rational numbers —is naturally derived from properties of Heronian triangles. Both Brahmagupta's and Euler's parametrizations can be recovered from this rational parametrization by clearing denominators . This provides a proof that Brahmagupta's and Euler's parametrizations generate all Heronian triangles.
The Indian mathematician Brahmagupta (598-668 A.D.) discovered the following parametric equations for generating Heronian triangles, [ 20 ] but did not prove that every similarity class of Heronian triangles can be obtained this way. [ citation needed ]
For three positive integers m , n and k that are setwise coprime ( gcd ( m , n , k ) = 1 {\displaystyle \gcd(m,n,k)=1} ) and satisfy m n > k 2 {\displaystyle mn>k^{2}} (to guarantee positive side lengths) and m ≥ n {\displaystyle m\geq n} (for uniqueness):
where s is the semiperimeter, A is the area, and r is the inradius.
The resulting Heronian triangle is not always primitive, and a scaling may be needed for getting the corresponding primitive triangle. For example, taking m = 36 , n = 4 and k = 3 produces a triangle with a = 5220 , b = 900 and c = 5400 , which is similar to the (5, 29, 30) Heronian triangle with a proportionality factor of 180 .
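The explicit formulas are not reproduced above; one commonly cited form of Brahmagupta's parametrization (an assumption here) is a = n(m² + k²), b = m(n² + k²), c = (m + n)(mn − k²), with area A = kmn(m + n)(mn − k²). The sketch below reproduces the example just given:

```python
# One commonly cited form of Brahmagupta's parametrization (an assumption here,
# since the formulas are not reproduced above): requires m >= n, gcd(m, n, k) = 1
# and m*n > k*k.
def brahmagupta(m, n, k):
    a = n * (m * m + k * k)
    b = m * (n * n + k * k)
    c = (m + n) * (m * n - k * k)
    area = k * m * n * (m + n) * (m * n - k * k)
    return (a, b, c), area

sides, area = brahmagupta(36, 4, 3)
print(sides, area)                                      # (5220, 900, 5400) 2332800
print(tuple(s // 180 for s in sides), area // 180**2)   # (29, 5, 30) 72
```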
The fact that the generated triangle is not primitive is an obstacle for using this parametrization for generating all Heronian triangles with side lengths less than a given bound, since the size of gcd ( a , b , c ) {\displaystyle \gcd(a,b,c)} cannot be predicted. [ 20 ]
The following method of generating all Heronian triangles was discovered by Leonhard Euler , [ 21 ] who was the first to provably parametrize all such triangles.
For four positive integers m coprime to n and p coprime to q ( gcd ( m , n ) = gcd ( p , q ) = 1 {\displaystyle \gcd {(m,n)}=\gcd {(p,q)}=1} ) satisfying m p > n q {\displaystyle mp>nq} (to guarantee positive side lengths):
where s is the semiperimeter, A is the area, and r is the inradius.
Even when m , n , p , and q are pairwise coprime, the resulting Heronian triangle may not be primitive. In particular, if m , n , p , and q are all odd, the three side lengths are even. It is also possible that a , b , and c have a common divisor other than 2 . For example, with m = 2 , n = 1 , p = 7 , and q = 4 , one gets ( a , b , c ) = (130, 140, 150) , where each side length is a multiple of 10 ; the corresponding primitive triple is (13, 14, 15) , which can also be obtained by dividing the triple resulting from m = 2, n = 1, p = 3, q = 2 by two, then exchanging b and c .
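Again the explicit formulas are not reproduced above; one standard statement of Euler's parametrization (an assumption here) is a = mn(p² + q²), b = pq(m² + n²), c = (mq + np)(mp − nq), with area A = mnpq(mq + np)(mp − nq). A short sketch reproducing the example above:

```python
# One standard statement of Euler's parametrization (an assumption here, since
# the formulas are not reproduced above): requires gcd(m, n) = gcd(p, q) = 1
# and m*p > n*q.
def euler(m, n, p, q):
    a = m * n * (p * p + q * q)
    b = p * q * (m * m + n * n)
    c = (m * q + n * p) * (m * p - n * q)
    area = m * n * p * q * (m * q + n * p) * (m * p - n * q)
    return (a, b, c), area

print(euler(2, 1, 7, 4))   # ((130, 140, 150), 8400) -- primitive form (13, 14, 15), area 84
print(euler(2, 1, 3, 2))   # ((26, 30, 28), 336) -- halved, with b and c exchanged, this is (13, 14, 15)
```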
Let a , b , c > 0 {\displaystyle a,b,c>0} be the side lengths of any triangle, let α , β , γ {\displaystyle \alpha ,\beta ,\gamma } be the interior angles opposite these sides, and let t = tan α 2 , {\textstyle t=\tan {\frac {\alpha }{2}},} u = tan β 2 , {\textstyle u=\tan {\frac {\beta }{2}},} and v = tan γ 2 {\textstyle v=\tan {\frac {\gamma }{2}}} be the half-angle tangents. The values t , u , v {\displaystyle t,u,v} are all positive and satisfy t u + u v + v t = 1 {\displaystyle tu+uv+vt=1} ; this "triple tangent identity" is the half-angle tangent version of the fundamental triangle identity written as α 2 + β 2 + γ 2 = π 2 {\textstyle {\frac {\alpha }{2}}+{\frac {\beta }{2}}+{\frac {\gamma }{2}}={\frac {\pi }{2}}} radians (that is, 90°), as can be proved using the addition formula for tangents . By the laws of sines and cosines , all of the sines and the cosines of α , β , γ {\displaystyle \alpha ,\beta ,\gamma } are rational numbers if the triangle is a rational Heronian triangle and, because a half-angle tangent is a rational function of the sine and cosine , it follows that the half-angle tangents are also rational.
Conversely, if t , u , v {\displaystyle t,u,v} are positive rational numbers such that t u + u v + v t = 1 , {\displaystyle tu+uv+vt=1,} it can be seen that they are the half-angle tangents of the interior angles of a class of similar Heronian triangles. [ 22 ] The condition t u + u v + v t = 1 {\displaystyle tu+uv+vt=1} can be rearranged to v = 1 − t u t + u , {\textstyle v={\frac {1-tu}{t+u}},} and the restriction v > 0 {\displaystyle v>0} requires t u < 1. {\displaystyle tu<1.} Thus there is a bijection between the similarity classes of rational Heronian triangles and the pairs of positive rational numbers ( t , u ) {\displaystyle (t,u)} whose product is less than 1 .
To make this bijection explicit, one can choose, as a specific member of the similarity class, the triangle inscribed in a unit-diameter circle with side lengths equal to the sines of the opposite angles: [ 23 ]
where s = 1 2 ( a + b + c ) {\displaystyle s={\tfrac {1}{2}}(a+b+c)} is the semiperimeter, A = 1 2 a b sin γ {\displaystyle A={\tfrac {1}{2}}ab\sin \gamma } is the area, r = ( s − a ) ( s − b ) ( s − c ) s {\displaystyle r={\sqrt {\tfrac {(s-a)(s-b)(s-c)}{s}}}} is the inradius, and all these values are rational because t {\displaystyle t} and u {\displaystyle u} are rational.
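A brief sketch of this construction with exact rational arithmetic (the identity sin θ = 2w/(1 + w²) for w = tan(θ/2), used below, is a standard half-angle formula assumed here): choosing t = 1/2 and u = 1/3 yields the similarity class of the (3, 4, 5) right triangle.

```python
from fractions import Fraction as F

# Rational Heronian triangle inscribed in a unit-diameter circle, built from two
# half-angle tangents t, u (positive rationals with t*u < 1); side lengths equal
# the sines of the opposite angles, sin(theta) = 2w/(1 + w^2) for w = tan(theta/2).
def triangle_from_tangents(t, u):
    assert t > 0 and u > 0 and t * u < 1
    v = (1 - t * u) / (t + u)                 # third half-angle tangent
    sin = lambda w: 2 * w / (1 + w * w)
    a, b, c = sin(t), sin(u), sin(v)          # side lengths (all rational)
    area = a * b * sin(v) / 2                 # A = (1/2) a b sin(gamma)
    return (a, b, c), area

print(triangle_from_tangents(F(1, 2), F(1, 3)))
# ((Fraction(4, 5), Fraction(3, 5), Fraction(1, 1)), Fraction(6, 25))
```

Scaling this particular output by 5 clears the denominators and gives the integer triangle (3, 4, 5) with area 6.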
To obtain an (integral) Heronian triangle, the denominators of a , b , and c must be cleared . There are several ways to do this. If t = m / n {\displaystyle t=m/n} and u = p / q , {\displaystyle u=p/q,} with gcd ( m , n ) = gcd ( p , q ) = 1 {\displaystyle \gcd(m,n)=\gcd(p,q)=1} ( irreducible fractions ), and the triangle is scaled up by 1 2 ( m 2 + n 2 ) ( p 2 + q 2 ) , {\displaystyle {\tfrac {1}{2}}(m^{2}+n^{2})(p^{2}+q^{2}),} the result is Euler's parametrization. If t = m / k {\displaystyle t=m/k} and u = n / k {\displaystyle u=n/k} with gcd ( m , n , k ) = 1 {\displaystyle \gcd(m,n,k)=1} (lowest common denominator), and the triangle is scaled up by ( k 2 + m 2 ) ( k 2 + n 2 ) / 2 k , {\displaystyle (k^{2}+m^{2})(k^{2}+n^{2})/2k,} the result is similar but not quite identical to Brahmagupta's parametrization. If, instead, it is 1 / t {\displaystyle 1/t} and 1 / u {\displaystyle 1/u} that are reduced to the lowest common denominator, that is, if t = k / m {\displaystyle t=k/m} and u = k / n {\displaystyle u=k/n} with gcd ( m , n , k ) = 1 , {\displaystyle \gcd(m,n,k)=1,} then one gets exactly Brahmagupta's parametrization by scaling up the triangle by ( k 2 + m 2 ) ( k 2 + n 2 ) / 2 k . {\displaystyle (k^{2}+m^{2})(k^{2}+n^{2})/2k.}
This proves that either parametrization generates all Heronian triangles.
The values of t , u and v that give the set of triangles that are geometrically similar to the triangle with side lengths a , b , and c , semiperimeter s = 1 2 ( a + b + c ) {\displaystyle s={\tfrac {1}{2}}(a+b+c)} , and area A are ( t , u , v ) = ( A s ( s − a ) , A s ( s − b ) , A s ( s − c ) ) . {\displaystyle (t,u,v)=\left({\frac {A}{s(s-a)}},{\frac {A}{s(s-b)}},{\frac {A}{s(s-c)}}\right)\,.}
Kurz (2008) has derived fast algorithms for generating Heronian triangles.
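Kurz's algorithms themselves are not described here; as a point of comparison, the following naive sketch (for illustration only, and far slower than the methods in the cited paper) enumerates primitive Heronian triangles up to a given maximal side length directly from the Diophantine characterization:

```python
from math import gcd, isqrt

# Naive enumeration of primitive Heronian triangles with a <= b <= c <= max_side,
# straight from the Diophantine characterization. This brute-force search is not
# Kurz's fast algorithm.
def primitive_heronian(max_side):
    found = []
    for c in range(3, max_side + 1):
        for b in range(2, c + 1):
            for a in range(c - b + 1, b + 1):          # triangle inequality a + b > c
                if gcd(gcd(a, b), c) != 1:
                    continue
                s16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
                r = isqrt(s16)                          # candidate value of 4*A
                if r * r == s16 and r % 4 == 0:
                    found.append((a, b, c, r // 4))     # sides plus integer area
    return found

print(primitive_heronian(10))   # [(3, 4, 5, 6), (5, 5, 6, 12), (5, 5, 8, 12)]
```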
There are infinitely many primitive and indecomposable non-Pythagorean Heronian triangles with integer values for the inradius r {\displaystyle r} and all three of the exradii ( r a , r b , r c ) {\displaystyle (r_{a},r_{b},r_{c})} , including the ones generated by [ 24 ] : Thm. 4
There are infinitely many Heronian triangles that can be placed on a lattice such that not only are the vertices at lattice points, as holds for all Heronian triangles, but additionally the centers of the incircle and excircles are at lattice points. [ 24 ] : Thm. 5
See also Integer triangle § Heronian triangles for parametrizations of some types of Heronian triangles.
The list of primitive integer Heronian triangles, sorted by area and, if this is the same, by perimeter , starts as in the following table. "Primitive" means that the greatest common divisor of the three side lengths equals 1.
The list of primitive Heronian triangles whose sides do not exceed 6,000,000 has been computed by Kurz (2008) .
As of February 2021, only two primitive Heronian triangles with perfect square sides are known:
(1853 2 , 4380 2 , 4427 2 , Area= 32 918 611 718 880 ), published in 2013. [ 25 ]
(11789 2 , 68104 2 , 68595 2 , Area= 284 239 560 530 875 680 ), published in 2018. [ 26 ]
Heronian triangles with perfect square sides are connected to the Perfect cuboid problem. The existence of a solution to the Perfect cuboid problem is equivalent to the existence of a solution to the Perfect square triangle problem: [ 27 ] "Does there exist a triangle whose side lengths are perfect squares and whose angle bisectors are integers?".
A shape is called equable if its area equals its perimeter. There are exactly five equable Heronian triangles: the ones with side lengths (5,12,13), (6,8,10), (6,25,29), (7,15,20), and (9,10,17), [ 28 ] [ 29 ] though only four of them are primitive.
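A quick sketch confirming this list (computing the area from the sides via the squared form of Heron's formula):

```python
from math import gcd, isqrt

# Check that each listed triangle is equable (area equals perimeter) and whether
# it is primitive (side lengths setwise coprime).
for a, b, c in [(5, 12, 13), (6, 8, 10), (6, 25, 29), (7, 15, 20), (9, 10, 17)]:
    s16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)   # equals (4*A)^2
    area = isqrt(s16) // 4
    print((a, b, c), area, area == a + b + c, gcd(gcd(a, b), c) == 1)
# Every row is equable; (6, 8, 10) -- twice the (3, 4, 5) triangle -- is the
# one non-primitive case.
```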
Since the area of an equilateral triangle with rational sides is an irrational number , no equilateral triangle is Heronian. However, a sequence of isosceles Heronian triangles that are "almost equilateral" can be developed from the duplication of right-angled triangles , in which the hypotenuse is almost twice as long as one of the legs. The first few examples of these almost-equilateral triangles are listed in the following table (sequence A102341 in the OEIS ):
There is a unique sequence of Heronian triangles that are "almost equilateral" because the three sides are of the form n − 1 , n , n + 1 . A method for generating all solutions to this problem based on continued fractions was described in 1864 by Edward Sang , [ 30 ] and in 1880 Reinhold Hoppe gave a closed-form expression for the solutions. [ 31 ] The first few examples of these almost-equilateral triangles are listed in the following table (sequence A003500 in the OEIS ):
Subsequent values of n can be found by multiplying the previous value by 4, then subtracting the value prior to that one ( 52 = 4 × 14 − 4 , 194 = 4 × 52 − 14 , etc.), thus:
where t denotes any row in the table. This is a Lucas sequence . Alternatively, the formula ( 2 + 3 ) t + ( 2 − 3 ) t {\displaystyle (2+{\sqrt {3}})^{t}+(2-{\sqrt {3}})^{t}} generates all n for positive integers t . Equivalently, let A = area and y = inradius , then,
where { n , y } are solutions to n 2 − 12 y 2 = 4 . A small transformation n = 2 x yields a conventional Pell equation x 2 − 3 y 2 = 1 , the solutions of which can then be derived from the regular continued fraction expansion for √ 3 . [ 32 ]
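A short sketch generating the middle sides n by the recurrence just described (starting values 4 and 14, as in the sequence above) and confirming that each triangle (n − 1, n, n + 1) has integer area:

```python
from math import isqrt

# Middle sides n of the "almost equilateral" Heronian triangles (n-1, n, n+1),
# generated by n_{t+1} = 4*n_t - n_{t-1} with n_1 = 4, n_2 = 14
# (equivalently n_t = (2 + sqrt(3))^t + (2 - sqrt(3))^t).
def middle_sides(count):
    seq = [4, 14]
    while len(seq) < count:
        seq.append(4 * seq[-1] - seq[-2])
    return seq[:count]

for n in middle_sides(4):
    a, b, c = n - 1, n, n + 1
    s16 = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    print((a, b, c), isqrt(s16) // 4)
# (3, 4, 5) 6, (13, 14, 15) 84, (51, 52, 53) 1170, (193, 194, 195) 16296
```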
The variable n is of the form n = 2 + 2 k {\displaystyle n={\sqrt {2+2k}}} , where k is 7, 97, 1351, 18817, .... The numbers in this sequence have the property that k consecutive integers have integral standard deviation . [ 33 ] | https://en.wikipedia.org/wiki/Heronian_triangle |
Herpetology (from Ancient Greek ἑρπετόν herpetón , meaning " reptile " or "creeping animal") is a branch of zoology concerned with the study of amphibians (including frogs , salamanders , and caecilians (Gymnophiona)) and reptiles (including snakes , lizards , turtles , crocodilians , and tuataras ). [ 1 ] [ 2 ] Birds , which are cladistically included within Reptilia, are traditionally excluded here; the separate scientific study of birds is the subject of ornithology . [ 3 ]
The precise definition of herpetology is the study of ectothermic (cold-blooded) tetrapods . This definition of "herps" (otherwise called "herptiles" or "herpetofauna") excludes fish ; however, it is not uncommon for herpetological and ichthyological scientific societies to collaborate. For instance, groups such as the American Society of Ichthyologists and Herpetologists have co-published journals and hosted conferences to foster the exchange of ideas between the fields. [ 4 ] Herpetological societies are formed to promote interest in reptiles and amphibians, both captive and wild.
Herpetological studies can offer benefits relevant to other fields by providing research on the role of amphibians and reptiles in global ecology . For example, by monitoring amphibians that are very sensitive to environmental changes, herpetologists record visible warnings that significant climate changes are taking place. [ 5 ] [ 6 ] Although they can be deadly, some toxins and venoms produced by reptiles and amphibians are useful in human medicine . Currently, some snake venom has been used to create anti-coagulants that work to treat strokes and heart attacks . [ 7 ]
The word herpetology is from the Ancient Greek words ἑρπετόν ( herpetón ), meaning "creeping animal", and λόγος ( lógos ), meaning "study". [ 8 ]
"Herp" is a vernacular term for non-avian reptiles and amphibians. It is derived from the archaic term "herpetile", with roots back to Linnaeus's classification of animals, in which he grouped reptiles and amphibians in the same class. There are over 6700 species of amphibians [ 9 ] and over 9000 species of reptiles. [ 10 ] Despite its modern taxonomic irrelevance, the term has persisted, particularly in the names of herpetology, the scientific study of non-avian reptiles and amphibians, and herpetoculture , the captive care and breeding of reptiles and amphibians.
The field of herpetology can be divided into areas dealing with particular taxonomic groups such as frogs and other amphibians ( batrachology ), [ 11 ] [ 12 ] snakes (ophiology or ophidiology), lizards (saurology) and turtles (cheloniology, chelonology, or testudinology). [ 13 ] [ 14 ]
More generally, herpetologists work on functional problems in the ecology , evolution , physiology , behavior , taxonomy, or molecular biology of amphibians and reptiles. Amphibians or reptiles can be used as model organisms for specific questions in these fields, such as the role of frogs in the ecology of a wetland . All of these areas are related through their evolutionary history, an example being the evolution of viviparity (including behavior and reproduction ). [ 15 ]
Career options in the field of herpetology include lab research , field studies and surveys, assistance in veterinary and medical procedures, zoological staff, museum staff, and college teaching. [ 16 ]
In modern academic science, it is rare for an individual to solely consider themselves to be a herpetologist. Most individuals focus on a particular field such as ecology, evolution, taxonomy, physiology, or molecular biology, and within that field ask questions pertaining to or best answered by examining reptiles and amphibians. For example, an evolutionary biologist who is also a herpetologist may choose to work on an issue such as the evolution of warning coloration in coral snakes . [ 17 ]
Modern herpetological writers include Mark O'Shea [ 18 ] and Philip Purser. Modern herpetological showmen include Jeff Corwin , Steve Irwin (popularly known as the "Crocodile Hunter"), and Austin Stevens , popularly known as "Austin Snakeman" in the TV series Austin Stevens: Snakemaster .
Herpetology is an established hobby around the world due to the varied biodiversity in many environments. Many amateur herpetologists refer to themselves as "herpers". [ 19 ]
Most colleges or universities do not offer a major in herpetology at the undergraduate or the graduate level . Instead, persons interested in herpetology select a major in the biological sciences . The knowledge learned about all aspects of the biology of animals is then applied to an individual study of herpetology. [ 20 ]
Herpetology research is published in academic journals including Ichthyology & Herpetology , founded in 1913 [ 21 ] (under the name Copeia in honour of Edward Drinker Cope ); Herpetologica , founded in 1936; [ 22 ] Reptiles and amphibians , founded in 1990; [ 23 ] and Contemporary Herpetology, founded in 1997 and stopped publishing in 2009. [ 24 ] | https://en.wikipedia.org/wiki/Herpetology |
A herpolhode is the curve traced out by the endpoint of the angular velocity vector ω of a rigid rotor , a rotating rigid body . The endpoint of the angular velocity moves in a plane in absolute space, called the invariable plane, that is orthogonal to the angular momentum vector L . The fact that the herpolhode is a curve in the invariable plane appears as part of Poinsot's construction .
The trajectory of the angular velocity around the angular momentum in the invariable plane is a circle in the case of a symmetric top , but in the general case wiggles inside an annulus, while still being concave towards the angular momentum.
H. Goldstein, Classical Mechanics , Addison-Wesley (1950), p. 159 ff.
V. I. Arnold, Mathematical Methods of Classical Mechanics , Second edition, Springer (1989), p. 146.
| https://en.wikipedia.org/wiki/Herpolhode |
The Herschel Space Observatory was a space observatory built and operated by the European Space Agency (ESA). It was active from 2009 to 2013, and was the largest infrared telescope ever launched until the launch of the James Webb Space Telescope in 2021. [ 5 ] Herschel carried a 3.5-metre (11.5 ft) mirror [ 5 ] [ 6 ] [ 7 ] [ 8 ] and instruments sensitive to the far infrared and submillimetre wavebands (55–672 μm). Herschel was the fourth and final cornerstone mission in the Horizon 2000 programme, following SOHO / Cluster II , XMM-Newton and Rosetta .
The observatory was carried into orbit by an Ariane 5 in May 2009, reaching the second Lagrangian point (L2) of the Earth–Sun system , 1,500,000 kilometres (930,000 mi) from Earth, about two months later. Herschel is named after Sir William Herschel , the discoverer of the infrared spectrum and planet Uranus , and his sister and collaborator Caroline Herschel . [ 9 ]
The observatory was capable of seeing the coldest and dustiest objects in space; for example, cool cocoons where stars form and dusty galaxies just starting to bulk up with new stars. [ 10 ] The observatory sifted through star-forming clouds—the "slow cookers" of star ingredients—to trace the path by which potentially life-forming molecules, such as water, form.
The telescope's lifespan was governed by the amount of coolant available for its instruments; when that coolant ran out, the instruments would stop functioning correctly. At the time of its launch, operations were estimated to last 3.5 years (to around the end of 2012). [ 11 ] It continued to operate until 29 April 2013 15:20 UTC, when Herschel ran out of coolant. [ 12 ]
NASA was a partner in the Herschel mission, with US participants contributing to the mission; providing mission-enabling instrument technology and sponsoring the NASA Herschel Science Center (NHSC) at the Infrared Processing and Analysis Center and the Herschel Data Search at the Infrared Science Archive . [ 13 ]
In 1982 the Far Infrared and Sub-millimetre Telescope ( FIRST ) was proposed to ESA . The ESA long-term policy-plan "Horizon 2000", produced in 1984, called for a High Throughput Heterodyne Spectroscopy mission as one of its cornerstone missions. In 1986, FIRST was adopted as this cornerstone mission. [ 14 ] It was selected for implementation in 1993, following an industrial study in 1992–1993. The mission concept was redesigned from Earth-orbit to the Lagrangian point L2, in light of experience gained from the Infrared Space Observatory [(2.5–240 μm) 1995–1998]. In 2000, FIRST was renamed Herschel. After being put out to tender in 2000, industrial activities began in 2001. [ 15 ] Herschel was launched in 2009.
The Herschel mission cost €1,100 million . [ 16 ] This figure includes spacecraft and payload, launch and mission expenses, and science operations. [ 17 ]
Herschel specialised in collecting light from objects in the Solar System as well as the Milky Way and even extragalactic objects billions of light-years away, such as newborn galaxies , and was charged with four primary areas of investigation: [ 18 ]
During the mission, Herschel "made over 35,000 scientific observations" and "amass[ed] more than 25,000 hours' worth of science data from about 600 different observing programs". [ 19 ]
The mission involved the first space observatory to cover the full far infrared and submillimetre waveband. [ 18 ] At 3.5 metres wide (11 ft), Herschel carried the largest optical telescope ever deployed in space. [ 20 ] It was made not from glass but from sintered silicon carbide . The mirror's blank was manufactured by Boostec in Tarbes , France ; ground and polished by Opteon Ltd. in Tuorla Observatory , Finland ; and coated by vacuum deposition at the Calar Alto Observatory in Spain . [ 21 ]
The light reflected by the mirror was focused onto three instruments, whose detectors were kept at temperatures below 2 K (−271 °C). [ 22 ] The instruments were cooled with over 2,300 litres (510 imp gal; 610 US gal) of liquid helium , boiling away in a near vacuum at a temperature of approximately 1.4 K (−272 °C). The supply of helium on board the spacecraft was a fundamental limit to the operational lifetime of the space observatory; [ 8 ] it was originally expected to be operational for at least three years. [ 23 ]
Herschel carried three science instruments: [ 24 ]
NASA developed and built the mixers, local oscillator chains and power amplifiers for this instrument. [ 30 ] The NASA Herschel Science Center , part of the Infrared Processing and Analysis Center at the California Institute of Technology, also in Pasadena, has contributed science planning and data analysis software. [ 31 ]
A common service module (SVM) was designed and built by Thales Alenia Space in its Turin plant for the Herschel and Planck missions, as they were combined into one single program. [ 32 ]
Structurally, the Herschel and Planck SVMs are very similar. Both SVMs are of octagonal shape and, for both, each panel is dedicated to accommodate a designated set of warm units, while taking into account the heat dissipation requirements of the different warm units, of the instruments, as well as the spacecraft.
Furthermore, on both spacecraft a common design has been achieved for the avionics systems, attitude control and measurement systems (ACMS), command and data management systems (CDMS), power subsystems and the tracking, telemetry, and command subsystem (TT&C).
All spacecraft units on the SVM are redundant.
On each spacecraft, the power subsystem consists of the solar array , employing triple-junction solar cells , a battery and the power control unit (PCU). It is designed to interface with the 30 sections of each solar array, provide a regulated 28 V bus, distribute this power via protected outputs and to handle the battery charging and discharging.
For Herschel, the solar array is fixed on the bottom part of the baffle designed to protect the cryostat from the Sun. The three-axis attitude control system maintains this baffle in direction of the Sun. The top part of this baffle is covered with optical solar reflector (OSR) mirrors reflecting 98% of the Sun's energy , avoiding heating of the cryostat.
This function is performed by the attitude control computer (ACC) which is the platform for the ACMS. It is designed to fulfil the pointing and slewing requirements of the Herschel and Planck payload.
The Herschel spacecraft is three-axis stabilized . The absolute pointing error needs to be less than 3.7 arc seconds.
The main sensor of the line of sight in both spacecraft is the star tracker .
The spacecraft, built in the Cannes Mandelieu Space Center , under Thales Alenia Space Contractorship, was successfully launched from the Guiana Space Centre in French Guiana at 13:12:02 UTC on 14 May 2009, aboard an Ariane 5 rocket, along with the Planck spacecraft , and placed on a very elliptical orbit on its way towards the second Lagrangian point . [ 33 ] [ 34 ] [ 35 ] The orbit's perigee was 270.0 km (intended 270.0 ± 4.5 ), apogee 1,197,080 km (intended 1 193 622 ± 151 800 ), inclination 5.99 deg (intended 6.00 ± 0.06 ). [ 36 ]
On 14 June 2009, ESA successfully sent the command for the cryocover to open which allowed the PACS system to see the sky and transmit images in a few weeks. The lid had to remain closed until the telescope was well into space to prevent contamination. [ 37 ]
Five days later the first set of test photos, depicting M51 Group , was published by ESA. [ 38 ]
In mid-July 2009, approximately sixty days after launch, it entered a halo orbit of 800,000 km average radius around the second Lagrangian point (L2) of the Earth-Sun system , 1.5 million kilometres from the Earth. [ 35 ] [ 39 ]
On 21 July 2009, Herschel commissioning was declared successful, allowing the start of the operational phase. A formal handover of the overall responsibility of Herschel was declared from the programme manager Thomas Passvogel to the mission manager Johannes Riedinger. [ 35 ]
Herschel was instrumental in the discovery of an unknown and unexpected step in the star forming process. The initial confirmation and later verification via help from ground-based telescopes of a vast hole of empty space, previously believed to be a dark nebula , in the area of NGC 1999 shed new light in the way newly forming star regions discard the material which surround them. [ 40 ]
In July 2010 a special issue of Astronomy and Astrophysics was published with 152 papers on initial results from the observatory. [ 41 ]
A second special issue of Astronomy and Astrophysics , devoted to the HIFI instrument alone, was published in October 2010; the separate, later publication was due to a technical failure that took HIFI out of service for over six months between August 2009 and February 2010. [ 42 ]
It was reported on 1 August 2011, that molecular oxygen had been definitively confirmed in space with the Herschel Space Telescope, the second time scientists have found the molecule in space. It had been previously reported by the Odin team. [ 43 ] [ 44 ]
An October 2011 report published in Nature states that Herschel 's measurements of deuterium levels in the comet Hartley 2 suggests that much of Earth's water could have initially come from cometary impacts. [ 45 ] On 20 October 2011, it was reported that oceans-worth of cold water vapor had been discovered in the accretion disc of a young star. Unlike warm water vapor, previously detected near forming stars, cold water vapor would be capable of forming comets which then could bring water to inner planets, as is theorized for the origin of water on Earth . [ 46 ]
On 18 April 2013, the Herschel team announced in another Nature paper that it had located an exceptional starburst galaxy which produced over 2,000 solar masses of stars a year. The galaxy, termed HFLS3 , is located at z = 6.34, originating only 880 million years after the Big Bang . [ 47 ]
Just days before the end of its mission, ESA announced that Herschel 's observations had led to the conclusion that water on Jupiter had been delivered as a result of the collision of Comet Shoemaker–Levy 9 in 1994. [ 48 ]
On 22 January 2014, ESA scientists using Herschel data reported the detection, for the first definitive time, of water vapor on the dwarf planet , Ceres , largest object in the asteroid belt . [ 49 ] [ 50 ] The finding is unexpected because comets , not asteroids , are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." [ 50 ]
On 29 April 2013, ESA announced that Herschel 's supply of liquid helium , used to cool the instruments and detectors on board, had been depleted, thus ending its mission. [ 12 ] At the time of the announcement, Herschel was approximately 1.5 million km from Earth. Because Herschel 's orbit at the L2 point is unstable, ESA wanted to guide the craft on a known trajectory. ESA managers considered two options:
The managers chose the first option because it was less costly. [ 52 ]
On 17 June 2013, Herschel was fully deactivated, with its fuel tanks forcibly depleted and the onboard computer programmed to cease communications with Earth. The final command, which severed communications, was sent from European Space Operations Centre (ESOC) at 12:25 UTC. [ 3 ]
The mission's post-operations phase continued until 2017. The main tasks were consolidation and refinement of instrument calibration, to improve data quality, and data processing, to create a body of scientifically validated data. [ 53 ]
Following Herschel 's demise, some European astronomers have pushed for the joint European-Japanese SPICA far-infrared observatory project, as well as ESA's continued partnership in NASA's James Webb Space Telescope . [ 12 ] [ 54 ] James Webb covers the near-infrared spectrum from 0.6 to 28.5 μm, and SPICA covers the mid-to-far-infrared spectral range between 12 and 230 μm. While Herschel 's dependence on liquid helium coolant limited the design life to around three years, SPICA would have used mechanical Joule-Thomson coolers to sustain cryogenic temperatures for a longer period of time. SPICA's sensitivity was to be two orders of magnitude higher than Herschel. [ 55 ]
NASA's proposed Origins Space Telescope (OST) would also observe in the far-infrared band of light. Europe is leading the study for one of OST's five instruments, the Heterodyne Receiver for OST (HERO). [ 56 ] | https://en.wikipedia.org/wiki/Herschel_Space_Observatory |
A Herschel wedge or Herschel prism is an optical prism used in solar observation to refract most of the light out of the optical path, allowing safe visual observation. It was first proposed and used by astronomer John Herschel in the 1830s.
The prism in a Herschel wedge has a trapezoidal cross section. The surface of the prism facing the light acts as a standard diagonal mirror , reflecting a small portion of the incoming light at 90 degrees into the eyepiece. The trapezoidal prism shape refracts the remainder of the light gathered by the telescope's objective away at an angle. The Herschel wedge reflects about 4.6% of the light that passes through one of the prism faces that is flat to 1/10 of the wavelength of the light. The remaining ~95.4% of the light and heat goes into the prism, exits through the other face and out of the back of the housing; thus, the excess light and heat are disposed of and not used for observing. [ 1 ] While a Herschel wedge decreases the intensity of the light, it does not affect the visible spectrum, resulting in a more accurate spectral profile, which can be filtered to bring out certain details. It is an alternative to white light filters, which, despite their name, inherently must block certain parts of the visible spectrum.
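Since the wedge passes roughly 4.6% of the light on to the eyepiece, its equivalent neutral-density value follows from ND = −log₁₀(transmission); a quick check of this arithmetic (the percentages are the figures quoted in this article):

```python
from math import log10

# Equivalent neutral-density value of a filter that passes a fraction T of the
# light to the observer: ND = -log10(T). For the ~4.6% reflected by the wedge
# this is about 1.3-1.4, consistent with the ~ND 1.35 figure quoted below for 4.5%.
def nd_value(transmission):
    return -log10(transmission)

print(round(nd_value(0.046), 2))   # 1.34
print(round(nd_value(0.045), 2))   # 1.35
```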
Herschel wedges present a unique set of hazards and design considerations for the amateur astronomer. Unlike a full-aperture neutral-density (ND) solar filter, a sub-aperture solar filter such as a Herschel wedge allows the full intensity of sunlight to be concentrated by the primary optic.
Secondary optics such as field flatteners, focal reducers, secondary mirrors, and bandpass filters that lie upstream of the Herschel wedge but downstream of the primary optic can overheat and be damaged. Reflectors are extremely dangerous to use with Herschel wedges, since their optical path is poorly contained. While the risk of damaging the telescope is often cited as the primary reason for avoiding sub-aperture solar filters on reflecting telescopes, the risk of being blinded is perhaps even more compelling.
Unlike refractors, whose focal planes lie to the rear of the telescope, reflectors such as SCT, Newtonian, RCT, Gregorian, and RASA telescopes have primary mirrors that focus light to a plane in front of the telescope. While some designs use this focal plane as is, others use additional lenses or reflective optics to correct and relocate a small portion of this focal plane to a separate area on the telescope. However, it is important to remember that the majority of this focal plane remains in free space, and when it is allowed to focus unfiltered sunlight, as when a reflector is used with a Herschel wedge, the results can be disastrous. Looking down the front of a reflecting telescope in direct, unfiltered sunlight is no different from staring into the eyepiece of a telescope aimed at the Sun without a filter. The large size of reflecting primary mirrors creates the potential for this focal point to burn the inside of the telescope tube or even nearby objects in the vicinity of the telescope.
People who are in the habit of inspecting the inside of their telescope by looking down the front, or who simply want to cap it while it is outside, may not realize that the same action in daylight, under direct sunlight, can blind them. Likewise, users of Newtonian telescopes, which require standing directly above the telescope to reach the eyepiece, may be burned or blinded by sunlight while slewing the instrument across the sky.
It is also important to note that even at 4.5%, (~N.D. 1.35 [ 2 ] ) the light from the sun is still strong enough to burn the retina, and so an appropriate neutral density filter must still be used. [ 3 ] | https://en.wikipedia.org/wiki/Herschel_wedge |
Herta Regina Leng (24 February 1903 – 17 July 1997) was an Austrian-American physicist and educator.
Leng was born on 24 February 1903 in Vienna, Austria . She was the daughter of Arthur Leng and Paula Leng, and sister of Leopold Ignaz Leng. Leng fled Austria in 1939 and eventually emigrated to the United States in 1940. She died on 17 July 1997 in Troy, New York .
Dr. Karl Lark-Horovitz , professor of physics at Purdue, had a keen interest in the development of the cyclotron and the application of physical techniques to solve biological problems, and sought to develop methods that utilized radioactive tracers produced from the cyclotron. With the assistance of Leng and Donald Tendam, radioactive tracers were employed following an intense regimen to develop these methods. Key studies concerned sodium and potassium in the human body and their uptake, distribution and excretion; sodium and potassium distribution in human blood cells; and the analysis of enteric coatings for medications. [ 1 ] [ 2 ] [ 3 ] Leng was awarded an American Association of University Women fellowship for work at Purdue. The fellowship permitted her the freedom to pursue the pioneer research on radioactive tracer materials.
In 1943, Leng moved to Troy, New York , to accept a faculty appointment in physics at Rensselaer Polytechnic Institute (RPI) and in 1966 was promoted to become RPI's first female full professor. [ 4 ]
Every year, RPI honors Leng with the Herta Leng Memorial Lecture Series. [ 5 ] | https://en.wikipedia.org/wiki/Herta_Regina_Leng |
Hertha Sponer (1 September 1895 – 27 February 1968) was a German physicist and chemist who contributed to modern quantum mechanics and molecular physics and was the first woman on the physics faculty of Duke University . She was the older sister of philologist and resistance fighter Margot Sponer . [ 1 ]
Sponer was born in Neisse (Nysa) , Prussian Silesia , and obtained her high school degree in Neisse. She spent a year at the University of Tübingen , after which she enrolled at the University of Göttingen where she received her PhD in 1920 under the supervision of Peter Debye . During her time at the University of Tübingen , she was an assistant of James Franck . In 1921 she, along with a few others, was among the first women to obtain a PhD in physics in Germany along with the right to teach science at a German university. In October 1925 she received a Rockefeller Foundation fellowship to stay at University of California, Berkeley , where she remained for a year. [ 2 ] During her time at Berkeley, she collaborated with R. T. Birge , developing what is now called the Birge-Sponer method for determining dissociation energies. [ 3 ]
By 1932, Sponer had published around 20 scientific papers in journals such as Nature and Physical Review , and had become an associate professor of physics. In 1933 James Franck resigned and left Göttingen and a year later she was dismissed from her position when Hitler came to power, due to the Nazis' stigma against women in academia. In 1934 Sponer moved to Oslo to teach at the University of Oslo as a visiting professor, and in 1936 she started her appointment at Duke University where she remained as a professor until 1966 when she became professor emeritus, a position she held until her death in 1968. [ 4 ]
During her academic career, Sponer conducted research in quantum mechanics, physics, and chemistry. She authored and published numerous studies, many of which were in collaboration with famous physicists including Edward Teller . She made many contributions to science including the application of quantum mechanics to molecular physics and work on the spectra of near ultra-violet absorption. She set up a spectroscopy lab in the physics department of Duke University, which was later moved to its own new building.
Sponer married James Franck in 1946. She died in Ilten , Lower Saxony . [ 5 ] | https://en.wikipedia.org/wiki/Hertha_Sponer |
Hertwig's rule , or the long axis rule , states that a cell divides along its long axis . Introduced by the German zoologist Oscar Hertwig in 1884, the rule emphasizes the cell shape as a default mechanism of spindle apparatus orientation. Hertwig's rule predicts cell division orientation , which is important for tissue architecture, cell fate and morphogenesis .
Hertwig's experiments studied the orientation of frog egg divisions. The frog egg has a round shape and the first division occurs in a random orientation. Hertwig compressed the egg between two parallel plates. The compression forced the egg to change its shape from round to elongated. Hertwig noticed that the elongated egg divides not randomly, but orthogonally to its long axis. The new daughter cells were formed along the longest axis of the cell. This observation thus became known as 'Hertwig's rule' or the 'long axis rule'. [ 1 ]
Recent studies in animal and plant systems support the 'long axis rule'. The studied systems include the mouse embryo, [ 2 ] Drosophila epithelium , [ 3 ] Xenopus blastomeres (Strauss 2006), MDCK cell monolayers [ 4 ] and plants (Gibson et al., 2011). The mechanism of the 'long axis rule' relies on interphase cell long axis sensing. However, during division many animal cell types undergo cell rounding, causing the long axis to disappear as the cell becomes round. It is at this rounding stage that the decision on the orientation of the cell division is made by the spindle apparatus . The spindle apparatus rotates in the round cell and after several minutes the spindle position is stabilised preferentially along the interphase cell long axis. The cell then divides along the spindle apparatus orientation. The first insights into how cells could remember their long axis came from studies on the Drosophila epithelium. The study indicated the participation of tricellular junctions (TCJs) in determining the spindle orientation. TCJs localized at the regions where three or more cells meet. As cells round up during mitosis, TCJs serve as spatial landmarks. The orientation of TCJs remains stable, independent of the shape changes associated with cell rounding. The positions of TCJs encode information about interphase cell shape anisotropy to orient division in the rounded mitotic cell. [ 3 ] However this study is limited to only one type of epithelia in Drosophila melanogaster and has not been shown to be true in other epithelial types.
It has been shown that mechanical force can cause cells to divide against their long axis and instead with the direction of mechanical stretch in MDCK monolayers. [ 5 ]
Cell divisions along the long axis are proposed to be implicated in morphogenesis, the tissue response to stresses, and tissue architecture.
Division along the long cell axis reduces global tissue stress more rapidly than random divisions or divisions along the axis of mechanical stress. Long-axis division contributes to the formation of isotropic cell shapes within the monolayer. | https://en.wikipedia.org/wiki/Hertwig_rule |
The principle of least constraint is one variational formulation of classical mechanics enunciated by Carl Friedrich Gauss in 1829, equivalent to all other formulations of analytical mechanics . Intuitively, it says that the acceleration of a constrained physical system will be as similar as possible to that of the corresponding unconstrained system. [ 1 ]
The principle of least constraint is a least squares principle stating that the true accelerations of a mechanical system of n {\displaystyle n} masses minimize the quantity
where the j th particle has mass m j {\displaystyle m_{j}} , position vector r j {\displaystyle \mathbf {r} _{j}} , and applied non-constraint force F j {\displaystyle \mathbf {F} _{j}} acting on the mass.
The notation r ˙ {\displaystyle {\dot {\mathbf {r} }}} indicates the time derivative of a vector function r ( t ) {\displaystyle \mathbf {r} (t)} , i.e. position. The corresponding accelerations r ¨ j {\displaystyle {\ddot {\mathbf {r} }}_{j}} satisfy the imposed constraints, which in general depend on the current state of the system, { r j ( t ) , r ˙ j ( t ) } {\displaystyle \{\mathbf {r} _{j}(t),{\dot {\mathbf {r} }}_{j}(t)\}} .
Recall that, due to the active (applied) force F j {\displaystyle \mathbf {F} _{j}} and the reactive (constraint) force F c j {\displaystyle \mathbf {F_{c}} _{j}} acting on it, each mass experiences the acceleration r ¨ j = F j m j + F c j m j = a j + a c j {\textstyle {\ddot {\mathbf {r} }}_{j}={\frac {\mathbf {F} _{j}}{m_{j}}}+{\frac {\mathbf {F_{c}} _{j}}{m_{j}}}=\mathbf {a} _{j}+\mathbf {a_{c}} _{j}} .
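The minimized quantity itself is not reproduced above; its standard form (an assumption here, stated up to an irrelevant overall constant factor) is the mass-weighted squared deviation of the actual accelerations from the "free" accelerations F_j / m_j:

```latex
Z \;=\; \tfrac{1}{2}\sum_{j=1}^{n} m_{j}
        \left|\, \ddot{\mathbf{r}}_{j} - \frac{\mathbf{F}_{j}}{m_{j}} \right|^{2}
```

In the absence of constraints the minimum Z = 0 is attained by Newton's second law, r̈_j = F_j / m_j; with constraints, the true accelerations deviate from these free accelerations as little as possible in this least-squares sense.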
Gauss's principle is equivalent to D'Alembert's principle .
The principle of least constraint is qualitatively similar to Hamilton's principle , which states that the true path taken by a mechanical system is an extremum of the action . However, Gauss's principle is a true (local) minimal principle, whereas the other is an extremal principle.
Hertz's principle of least curvature is a special case of Gauss's principle, restricted by the three conditions that there are no externally applied forces, no interactions (which can usually be expressed as a potential energy ), and all masses are equal. Without loss of generality, the masses may be set equal to one. Under these conditions, Gauss's minimized quantity can be written
The kinetic energy T {\displaystyle T} is also conserved under these conditions
Since the line element d s 2 {\displaystyle ds^{2}} in the 3 N {\displaystyle 3N} -dimensional space of the coordinates is defined
the conservation of energy may also be written
Dividing Z {\displaystyle Z} by 2 T {\displaystyle 2T} yields another minimal quantity
Since K {\displaystyle {\sqrt {K}}} is the local curvature of the trajectory in the 3 n {\displaystyle 3n} -dimensional space of the coordinates, minimization of K {\displaystyle K} is equivalent to finding the trajectory of least curvature (a geodesic ) that is consistent with the constraints.
Hertz's principle is also a special case of Jacobi 's formulation of the least-action principle .
Hertz designed the principle to eliminate the concept of force and dynamics, so that physics would consist exclusively of kinematics, of material points in constrained motion. He was critical of the "logical obscurity" surrounding the idea of force.
I would mention the experience that it is exceedingly difficult to expound to thoughtful hearers that very introduction to mechanics without being occasionally embarrassed, without feeling tempted now and again to apologize, without wishing to get as quickly as possible over the rudiments, and on to examples which speak for themselves. I fancy that Newton himself must have felt this embarrassment...
To replace the concept of force, he proposed that the acceleration of visible masses are to be accounted for, not by force, but by geometric constraints on the visible masses, and their geometric linkages to invisible masses. In this, he understood himself as continuing the tradition of Cartesian mechanical philosophy , such as Boltzmann 's explaining of heat by atomic motion, and Maxwell's explaining of electromagnetism by ether motion. Even though both atoms and the ether were not observable except via their effects, they were successful in explaining apparently non-mechanical phenomena mechanically. In trying to explain away "mechanical force", Hertz was "mechanizing classical mechanics". [ 2 ] | https://en.wikipedia.org/wiki/Hertz's_principle_of_least_curvature |
Chen–Ho encoding is a memory-efficient alternate system of binary encoding for decimal digits.
The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), [ 1 ] even when using packed BCD .
The encoding reduces the storage requirements of two decimal digits (100 states) from 8 to 7 bits, and those of three decimal digits (1000 states) from 12 to 10 bits using only simple Boolean transformations avoiding any complex arithmetic operations like a base conversion .
In what appears to have been a multiple discovery , some of the concepts behind what later became known as Chen–Ho encoding were independently developed by Theodore M. Hertz in 1969 [ 2 ] and by Tien Chi Chen ( 陳天機 ) (1928–) [ 3 ] [ 4 ] [ 5 ] [ 6 ] in 1971.
Hertz of Rockwell filed a patent for his encoding in 1969, which was granted in 1971. [ 2 ]
Chen first discussed his ideas with Irving Tze Ho ( 何宜慈 ) (1921–2003) [ 7 ] [ 8 ] [ 9 ] [ 10 ] in 1971. Chen and Ho were both working for IBM at the time, albeit in different locations. [ 11 ] [ 12 ] Chen also consulted with Frank Chin Tung [ 13 ] to verify the results of his theories independently. [ 12 ] IBM filed a patent in their name in 1973, which was granted in 1974. [ 14 ] At least by 1973, Hertz's earlier work must have been known to them, as the patent cites his patent as prior art . [ 14 ]
With input from Joseph D. Rutledge and John C. McPherson, [ 15 ] the final version of the Chen–Ho encoding was circulated inside IBM in 1974 [ 16 ] and published in 1975 in the journal Communications of the ACM . [ 15 ] [ 17 ] This version included several refinements, primarily related to the application of the encoding system. It constitutes a Huffman -like prefix code .
The encoding was referred to as Chen and Ho's scheme in 1975, [ 18 ] Chen's encoding in 1982 [ 19 ] and became known as Chen–Ho encoding or Chen–Ho algorithm since 2000. [ 17 ] After having filed a patent for it in 2001, [ 20 ] Michael F. Cowlishaw published a further refinement of Chen–Ho encoding known as densely packed decimal (DPD) encoding in IEE Proceedings – Computers and Digital Techniques in 2002. [ 21 ] [ 22 ] DPD has subsequently been adopted as the decimal encoding used in the IEEE 754-2008 and ISO/IEC/IEEE 60559:2011 floating-point standards.
Chen noted that the digits zero through seven were simply encoded using three binary digits of the corresponding octal group. He also postulated that one could use a flag to identify a different encoding for the digits eight and nine, which would be encoded using a single bit.
In practice, a series of Boolean transformations are applied to the stream of input bits, compressing BCD encoded digits from 12 bits per three digits to 10 bits per three digits. Reversed transformations are used to decode the resulting coded stream to BCD. Equivalent results can also be achieved by the use of a look-up table .
Chen–Ho encoding is limited to encoding sets of three decimal digits into groups of 10 bits (so called declets ). [ 1 ] Of the 1024 states possible by using 10 bits, it leaves only 24 states unused [ 1 ] (with don't care bits typically set to 0 on write and ignored on read). With only 2.34% wastage it gives a 20% more efficient encoding than BCD with one digit in 4 bits. [ 12 ] [ 17 ]
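The exact bit assignments of Chen–Ho encoding are not reproduced here; the following Python sketch implements an illustrative Huffman-like prefix scheme of the same flavor (digits 0–7 take three bits, digits 8–9 take one bit plus indicator bits) that likewise packs three decimal digits into 10 bits with 24 unused states. The bit layout is invented for illustration and does not match the published Chen–Ho (or DPD) tables.

```python
# Illustrative prefix encoding of three decimal digits into 10 bits, in the
# spirit of Chen-Ho: "small" digits 0-7 take 3 bits, "large" digits 8-9 take
# 1 bit, and leading indicator bits say which digits are large.
# NOTE: this bit layout is invented for illustration; it is NOT the published
# Chen-Ho (or DPD) assignment.
def encode(d1, d2, d3):
    digits = (d1, d2, d3)
    small = [format(d, "03b") if d <= 7 else "" for d in digits]
    large = [format(d - 8, "01b") if d >= 8 else "" for d in digits]
    big = tuple(d >= 8 for d in digits)
    if   big == (False, False, False): bits = "0"     + small[0] + small[1] + small[2]
    elif big == (False, False, True):  bits = "100"   + small[0] + small[1] + large[2]
    elif big == (False, True,  False): bits = "101"   + small[0] + small[2] + large[1]
    elif big == (True,  False, False): bits = "110"   + small[1] + small[2] + large[0]
    elif big == (False, True,  True):  bits = "11100" + small[0] + large[1] + large[2]
    elif big == (True,  False, True):  bits = "11101" + small[1] + large[0] + large[2]
    elif big == (True,  True,  False): bits = "11110" + small[2] + large[0] + large[1]
    else:                              bits = "11111" + "00" + large[0] + large[1] + large[2]
    return bits                        # always exactly 10 bits

def decode(bits):
    s = lambda f: int(f, 2)            # small digit from a 3-bit field
    g = lambda f: int(f, 2) + 8        # large digit from a 1-bit field
    if bits[0] == "0":      return s(bits[1:4]), s(bits[4:7]), s(bits[7:10])
    if bits[:3] == "100":   return s(bits[3:6]), s(bits[6:9]), g(bits[9])
    if bits[:3] == "101":   return s(bits[3:6]), g(bits[9]),   s(bits[6:9])
    if bits[:3] == "110":   return g(bits[9]),   s(bits[3:6]), s(bits[6:9])
    if bits[:5] == "11100": return s(bits[5:8]), g(bits[8]),   g(bits[9])
    if bits[:5] == "11101": return g(bits[8]),   s(bits[5:8]), g(bits[9])
    if bits[:5] == "11110": return g(bits[8]),   g(bits[9]),   s(bits[5:8])
    return g(bits[7]), g(bits[8]), g(bits[9])   # prefix "11111" plus two unused bits

# Round-trip over all 1000 digit combinations; like Chen-Ho, 1024 - 1000 = 24
# of the 10-bit patterns are never produced.
assert all(decode(encode(a, b, c)) == (a, b, c)
           for a in range(10) for b in range(10) for c in range(10))
print(encode(0, 0, 9), encode(9, 9, 9))   # 1000000001 1111100111
```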
Both Hertz and Chen also proposed similar, but less efficient, encoding schemes for compressing sets of two decimal digits (requiring 8 bits in BCD) into groups of 7 bits. [ 2 ] [ 12 ]
Larger sets of decimal digits could be divided into three- and two-digit groups. [ 2 ]
The patents also discuss the possibility of adapting the scheme to digits encoded in decimal codes other than 8-4-2-1 BCD , [ 2 ] such as Excess-3 , [ 2 ] Excess-6 , Jump-at-2 , Jump-at-8 , Gray , Glixon , O'Brien type-I and Gray–Stibitz code . [ a ] The same principles could also be applied to other bases.
In 1973, some form of Chen–Ho encoding appears to have been utilized in the address conversion hardware of the optional IBM 7070 / 7074 emulation feature for the IBM System/370 Model 165 and 370 Model 168 computers. [ 23 ] [ 24 ]
One prominent application uses a 128-bit register to store 33 decimal digits with a three digit exponent, effectively not less than what could be achieved using binary encoding (whereas BCD encoding would need 144 bits to store the same number of digits). | https://en.wikipedia.org/wiki/Hertz_encoding |
Hertzbleed is a hardware security attack which describes exploiting dynamic frequency scaling to reveal secret data. The attack is a kind of timing attack , bearing similarity to previous power analysis vulnerabilities. Hertzbleed is more dangerous than power analysis, as it can be exploited by a remote attacker. Disclosure of cryptographic keys is the main concern regarding the exploit but other uses of the attack have been demonstrated since its initial discovery. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
The exploit has been verified to work against Intel and AMD processors, with Intel's security advisory stating that all Intel processors are affected. [ 7 ] Other processors using frequency scaling exist, but the attack has not been tested on them.
Neither Intel nor AMD is planning to release microcode patches; instead, they advise that cryptographic libraries be hardened against the vulnerability.
Normal timing attacks are mitigated by using constant-time programming, which ensures that each instruction takes equally long, regardless of the input data. Hertzbleed combines a timing attack with a power analysis attack. A power analysis attack measures the power consumption of the CPU to deduce the data being processed. This, however, requires an attacker to be able to measure the power consumption.
Hertzbleed exploits execution time differences caused by dynamic frequency scaling, a CPU feature which changes the processor's frequency to maintain power consumption and temperature constraints. As the processor's frequency depends on the power consumption, which in turn depends on the data, a remote attacker can deduce the data being processed from execution time. Hertzbleed thus effectively bypasses constant-time programming, which does not take into account changes in processor frequency. [ 3 ] | https://en.wikipedia.org/wiki/Hertzbleed |
A Hertzian cone is the cone produced when an object passes through a solid , such as a bullet through glass. More technically, it is a cone of force that propagates through a brittle , amorphous , or cryptocrystalline solid material from a point of impact. This force eventually removes a full or partial cone in the material. [ 1 ] This is the physical principle that explains the form and characteristics of the flakes removed from a core of tool stone during the process of lithic reduction .
This phenomenon is named after the German physicist Heinrich Rudolf Hertz , who first described this type of wave-front propagation through various media.
Although not universally agreed upon, natural phenomena that have been grouped with Hertzian cone phenomena include the crescentic " chatter marks " made on smoothed bedrock by glacial ice dragging boulders along at its base, the numerous crescentic impact marks sometimes seen on pebbles and cobbles, and the shatter cones found at bolide impact sites. James Byous, working independently at the privately funded Dowd Research in Savannah, Georgia, USA, has made a protracted study of Hertzian cones; some of his work may be found via sharing points [ 2 ] or directly at Dowd Research, [ 3 ] and he has produced a comprehensive glossary of Hertzian fractures and related terms. [ 4 ] The angle of a Hertzian cone is often about 104 degrees when it is created by an indenter. Smaller cones may be produced if the material is too small or contains structural irregularities. However, in ballistics , the faster the projectile, the steeper the edges and angle of the cone. [ 5 ]
The Hertzsprung–Russell diagram (abbreviated as H–R diagram , HR diagram or HRD ) is a scatter plot of stars showing the relationship between the stars' absolute magnitudes or luminosities and their stellar classifications or effective temperatures . The diagram was created independently by Ejnar Hertzsprung in 1911 and by Henry Norris Russell in 1913, and represented a major step towards an understanding of stellar evolution .
In the nineteenth century large-scale photographic spectroscopic surveys of stars were performed at Harvard College Observatory , producing spectral classifications for tens of thousands of stars, culminating ultimately in the Henry Draper Catalogue . In one segment of this work Antonia Maury included divisions of the stars by the width of their spectral lines . [ 1 ] Hertzsprung noted that stars described with narrow lines tended to have smaller proper motions than the others of the same spectral classification. He took this as an indication of greater luminosity for the narrow-line stars, and computed secular parallaxes for several groups of these, allowing him to estimate their absolute magnitude. [ 2 ]
In 1910 Hans Oswald Rosenberg published a diagram plotting the apparent magnitude of stars in the Pleiades cluster against the strengths of the calcium K line and two hydrogen Balmer lines . [ 3 ] These spectral lines serve as a proxy for the temperature of the star, an early form of spectral classification. The apparent magnitude of stars in the same cluster is equivalent to their absolute magnitude and so this early diagram was effectively a plot of luminosity against temperature. The same type of diagram is still used today as a means of showing the stars in clusters without having to initially know their distance and luminosity. [ 4 ] Hertzsprung had already been working with this type of diagram, but his first publications showing it were not until 1911. His diagrams likewise used the apparent magnitudes of a cluster of stars all lying at the same distance. [ 5 ]
Russell's early (1913) versions of the diagram included Maury's giant stars identified by Hertzsprung, those nearby stars with parallaxes measured at the time, stars from the Hyades (a nearby open cluster ), and several moving groups , for which the moving cluster method could be used to derive distances and thereby obtain absolute magnitudes for those stars. [ 6 ]
There are several forms of the Hertzsprung–Russell diagram, and the nomenclature is not very well defined. All forms share the same general layout: stars of greater luminosity are toward the top of the diagram, and stars with higher surface temperature are toward the left side of the diagram.
The original diagram displayed the spectral type of stars on the horizontal axis and the absolute visual magnitude on the vertical axis. The spectral type is not a numerical quantity, but the sequence of spectral types is a monotonic series that reflects the stellar surface temperature. Modern observational versions of the chart replace spectral type by a color index (in diagrams made in the middle of the 20th Century, most often the B-V color ) of the stars. This type of diagram is what is often called an observational Hertzsprung–Russell diagram, or specifically a color–magnitude diagram (CMD), and it is often used by observers. [ 7 ] In cases where the stars are known to be at identical distances such as within a star cluster, a color–magnitude diagram is often used to describe the stars of the cluster with a plot in which the vertical axis is the apparent magnitude of the stars. For cluster members, by assumption there is a single additive constant difference between their apparent and absolute magnitudes, called the distance modulus , for all of that cluster of stars. Early studies of nearby open clusters (like the Hyades and Pleiades ) by Hertzsprung and Rosenberg produced the first CMDs, a few years before Russell's influential synthesis of the diagram collecting data for all stars for which absolute magnitudes could be determined. [ 3 ] [ 5 ]
Another form of the diagram plots the effective surface temperature of the star on one axis and the luminosity of the star on the other, almost invariably in a log-log plot . Theoretical calculations of stellar structure and the evolution of stars produce plots that match those from observations. This type of diagram could be called a temperature–luminosity diagram , but this term is hardly ever used; when the distinction is made, this form is called the theoretical Hertzsprung–Russell diagram instead. A peculiar characteristic of this form of the H–R diagram is that the temperatures are plotted from high temperature to low temperature, which aids in comparing this form of the H–R diagram with the observational form.
Although the two types of diagrams are similar, astronomers make a sharp distinction between the two. The reason for this distinction is that the exact transformation from one to the other is not trivial. To go between effective temperature and color requires a color–temperature relation , and constructing that is difficult; it is known to be a function of stellar composition and can be affected by other factors like stellar rotation . When converting luminosity or absolute bolometric magnitude to apparent or absolute visual magnitude, one requires a bolometric correction , which may or may not come from the same source as the color–temperature relation. One also needs to know the distance to the observed objects ( i.e. , the distance modulus) and the effects of interstellar obscuration , both in the color (reddening) and in the apparent magnitude (where the effect is called "extinction"). Color distortion (including reddening) and extinction (obscuration) are also apparent in stars having significant circumstellar dust . The ideal of direct comparison of theoretical predictions of stellar evolution to observations thus has additional uncertainties incurred in the conversions between theoretical quantities and observations.
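The bookkeeping involved in moving from theoretical quantities to observational ones can be sketched in a few lines of Python. This is only an illustration of the conversion steps described above: the solar bolometric magnitude is the IAU reference value, while the bolometric correction, intrinsic colour, extinction and reddening values used in the example are placeholders that would in practice come from a tabulated colour–temperature relation and an extinction estimate.

```python
import math

M_BOL_SUN = 4.74  # IAU reference value for the Sun's absolute bolometric magnitude

def observed_quantities(L_over_Lsun, BC_V, BV_intrinsic, d_pc, A_V, E_BV):
    """Convert a theoretical (luminosity, temperature) point into the
    observational plane (apparent V magnitude, observed B-V colour)."""
    M_bol = M_BOL_SUN - 2.5 * math.log10(L_over_Lsun)  # absolute bolometric magnitude
    M_V = M_bol - BC_V                                  # apply the bolometric correction
    mu = 5.0 * math.log10(d_pc / 10.0)                  # distance modulus
    m_V = M_V + mu + A_V                                # add distance and extinction
    BV_observed = BV_intrinsic + E_BV                   # add interstellar reddening
    return m_V, BV_observed

# A Sun-like star (BC_V ~ -0.07, intrinsic B-V ~ 0.65) at 100 pc with mild extinction:
print(observed_quantities(1.0, -0.07, 0.65, 100.0, 0.10, 0.03))
```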
Most of the stars occupy the region in the diagram along the line called the main sequence . During the stage of their lives in which stars are found on the main sequence line, they are fusing hydrogen in their cores. The next concentration of stars is on the horizontal branch ( helium fusion in the core and hydrogen burning in a shell surrounding the core). Another prominent feature is the Hertzsprung gap located in the region between A5 and G0 spectral type and between +1 and −3 absolute magnitudes (i.e., between the top of the main sequence and the giants in the horizontal branch ). RR Lyrae variable stars can be found in the left of this gap on a section of the diagram called the instability strip . Cepheid variables also fall on the instability strip, at higher luminosities.
The H–R diagram can be used by scientists to roughly measure how far away a star cluster or galaxy is from Earth. This can be done by comparing the apparent magnitudes of the stars in the cluster to the absolute magnitudes of stars with known distances (or of model stars). The observed group is then shifted in the vertical direction until the two main sequences overlap. The difference in magnitude that was bridged in order to match the two groups is called the distance modulus and is a direct measure of the distance (ignoring extinction ). This technique is known as main sequence fitting and is a type of spectroscopic parallax . Besides the main-sequence turn-off, the tip of the red giant branch can also be used. [ 8 ] [ 9 ]
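A minimal sketch of main-sequence fitting, assuming apparent magnitudes of cluster main-sequence stars and absolute magnitudes of a calibrated reference main sequence are already available at the same colours; extinction is ignored here, as in the description above, and the numbers are invented for illustration only.

```python
import statistics

# apparent V magnitudes of cluster main-sequence stars, keyed by colour bin
cluster_m = {0.4: 10.9, 0.6: 11.8, 0.8: 12.6}
# absolute V magnitudes of a calibrated reference main sequence at the same colours
reference_M = {0.4: 3.4, 0.6: 4.3, 0.8: 5.1}

# vertical shift needed to overlap the two sequences = distance modulus
offsets = [cluster_m[c] - reference_M[c] for c in cluster_m]
mu = statistics.median(offsets)

# distance modulus mu = 5*log10(d / 10 pc)  =>  d = 10**(mu/5 + 1) parsecs
distance_pc = 10 ** (mu / 5 + 1)
print(f"distance modulus = {mu:.2f} mag, distance = {distance_pc:.0f} pc")
```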
ESA's Gaia mission revealed several features in the diagram that were either not known or only suspected to exist. It found a gap in the main sequence that appears for M-dwarfs and that is explained by the transition from a partly convective core to a fully convective core. [ 10 ] [ 11 ] For white dwarfs the diagram shows several features. Two main concentrations appear along the white-dwarf cooling sequence and are explained by differences in atmospheric composition, chiefly hydrogen-dominated versus helium-dominated atmospheres. [ 12 ] A third concentration is explained by core crystallization of the white dwarfs' interiors, which releases energy and delays their cooling. [ 13 ] [ 14 ]
Contemplation of the diagram led astronomers to speculate that it might demonstrate stellar evolution , the main suggestion being that stars collapsed from red giants to dwarf stars and then moved down along the line of the main sequence in the course of their lifetimes. Stars were therefore thought to radiate energy by converting gravitational energy into radiation through the Kelvin–Helmholtz mechanism . This mechanism resulted in an age for the Sun of only tens of millions of years, creating a conflict over the age of the Solar System between astronomers on one side and biologists and geologists, who had evidence that the Earth was far older than that, on the other. This conflict was only resolved in the 1930s when nuclear fusion was identified as the source of stellar energy.
Following Russell's presentation of the diagram to a meeting of the Royal Astronomical Society in 1912, Arthur Eddington was inspired to use it as a basis for developing ideas on stellar physics . In 1926, in his book The Internal Constitution of the Stars , he explained the physics of how stars fit on the diagram. [ 15 ] The book anticipated the later discovery of nuclear fusion and correctly proposed that the star's source of power was the fusion of hydrogen into helium, liberating enormous energy. This was a particularly remarkable intuitive leap, since at that time the source of a star's energy was still unknown, thermonuclear energy had not been proven to exist, and it had not yet been discovered that stars are largely composed of hydrogen (see metallicity ). Eddington managed to sidestep this problem by concentrating on the thermodynamics of radiative transport of energy in stellar interiors. [ 16 ] He predicted that dwarf stars remain in an essentially static position on the main sequence for most of their lives. In the 1930s and 1940s, with an understanding of hydrogen fusion, came an evidence-backed theory of evolution to red giants, followed by speculation about the explosion and implosion of the remnants to form white dwarfs. The term supernova nucleosynthesis describes the creation of elements during the evolution and explosion of a pre-supernova star, a concept put forth by Fred Hoyle in 1954. [ 17 ] Quantum-mechanical and classical-mechanical models of stellar processes allow the Hertzsprung–Russell diagram to be annotated with known conventional paths called stellar sequences; rarer and more anomalous examples continue to be added as more stars are analysed and the mathematical models are refined. | https://en.wikipedia.org/wiki/Hertzsprung–Russell_diagram
In surface chemistry , the Hertz–Knudsen equation , also known as the Knudsen–Langmuir equation , describes evaporation rates. It is named after Heinrich Hertz and Martin Knudsen .
The Hertz–Knudsen equation describes the non- dissociative adsorption of a gas molecule on a surface by expressing the number of molecules impinging on the surface per unit area and per unit time as a function of the gas pressure and other parameters that characterise both the gas-phase molecule and the surface. In one common form it reads: [ 1 ] [ 2 ]
φ = P / √(2 π m k B T) {\displaystyle \phi ={\frac {P}{\sqrt {2\pi mk_{\text{B}}T}}}}
where φ is the impingement flux (the number of molecules striking the surface per unit area per unit time), P is the gas pressure, m is the mass of one molecule, k B is the Boltzmann constant and T is the absolute temperature.
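As a rough numerical illustration of the form given above (a sketch with made-up conditions, not a reference calculation), the impingement flux can be evaluated directly:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27    # atomic mass unit, kg

def impingement_flux(pressure_pa, molar_mass_amu, temperature_k):
    """Hertz-Knudsen flux: molecules striking a unit surface per unit time."""
    m = molar_mass_amu * amu
    return pressure_pa / math.sqrt(2.0 * math.pi * m * k_B * temperature_k)

# Water vapour (18 amu) at 1000 Pa and 300 K:
print(f"{impingement_flux(1000.0, 18.0, 300.0):.3e} molecules per m^2 per s")
```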
Since the result of the equation has units of inverse seconds per unit area (a flux), it can be interpreted as a rate constant for the adsorption process. | https://en.wikipedia.org/wiki/Hertz–Knudsen_equation
The Herz reaction , named after the chemist Richard Herz , is the chemical conversion of an aniline to the benzo dithiazolium salt by its reaction with disulfur dichloride . The salt is called a Herz salt . Hydrolysis of this Herz salt gives the corresponding sodium thiolate , which can be further converted to the 2-aminothiophenol. [ 1 ]
The 2-aminothiophenols are suitable for diazotization, giving benzothiadiazoles. [ 2 ] Alternatively, the sodium 2-aminothiophenolate can be converted to a 1,3- benzothiazole .
Aniline 5 is converted to compound 6 in three steps.
The compound, ( thioindoxyl , 7 ) is an important intermediate in the organic synthesis of some dyes . Condensation with acenaphthoquinone gives 8 , a dye of the so-called Ciba-Scarlet type, while condensation of 7 with isatin results in the thioindigo dye 9 . | https://en.wikipedia.org/wiki/Herz_reaction |
The STAT3-Ser/Hes3 signaling axis is a specific type of intracellular signaling pathway that regulates several fundamental properties of cells.
Cells in tissues need to be able to sense and interpret changes in their environment. For example, cells must be able to detect when they are in physical contact with other cells in order to regulate their growth and avoid the generation of tumors (“ carcinogenesis ”). In order to do so, cells place receptor molecules on their surface, often with a section of the receptor exposed to the outside of the cell (extracellular environment), and a section inside the cell (intracellular environment). These molecules are exposed to the environment outside of the cell and, therefore, in position to sense it. They are called receptors because when these come into contact with particular molecules (termed ligands ), then chemical changes are induced to the receptor. These changes typically involve alterations in the three-dimensional shape of the receptor. These 3D structure changes affect both the extracellular and intracellular parts (domains) of the receptor. As a result, interaction of a receptor with its specific ligand which is located outside of the cell causes changes to the receptor part which is inside the cell. A signal from the extracellular space, therefore, can affect the biochemical state inside the cell.
Following receptor activation by the ligand, several steps can sequentially ensue. For example, the 3D shape changes to the intracellular domain may render it recognizable to catalytic proteins ( enzymes ) that are located inside the cell and have physical access to it. These enzymes may then induce chemical changes to the intracellular domain of the activated receptor, including the addition of phosphate chemical groups to specific components of the receptor ( phosphorylation ), or the physical separation ( cleavage ) of the intracellular domain. Such modifications may enable the intracellular domain to act as an enzyme itself, meaning that it may now catalyze the modification of other proteins in the cell. Enzymes which catalyze phosphorylation modifications are termed kinases . These modified proteins may then also be activated and enabled to induce further modifications to other proteins, and so on. This sequence of catalytic modifications is termed a “ signal transduction pathway ” or “ second messenger cascade ”. It is a critical mechanism employed by cells to sense their environment and induce complex changes to their state. Such changes may include, as noted, chemical modifications to other molecules, as well as decisions concerning which genes are activated and which are not ( transcriptional regulation ).
There are many signal transduction pathways in a cell and each of these involves many different proteins. This provides many opportunities for different signal transduction pathways to intersect (cross-talk). As a result, a cell simultaneously processes and interprets many different signals, as would be expected since the extracellular environment contains many different ligands. Cross-talk also allows the cell to integrate these many signals as opposed to processing them independently. For example, mutually opposing signals may be activated at the same time by different ligands, and the cell can interpret these signals as a whole.
Signal transduction pathways are widely studied in biology as they provide mechanistic understanding of how a cell operates and takes critical decisions (e.g. to multiply, move, die, activate genes etc.). These pathways also provide many drug targets and are of great relevance to drug discovery efforts.
The notch / STAT3 -Ser/Hes3 signaling axis is a recently identified signal transduction branch of the notch [ 1 ] signaling pathway, originally shown to regulate the number of neural stem cells in culture and in the living adult brain. [ 2 ] [ 3 ] Pharmacological activation of this pathway opposed the progression of neurodegenerative disease in rodent models. More recent efforts have implicated it in carcinogenesis and diabetes . The pathway can be activated by soluble ligands of the notch receptor which induce the sequential activation of intracellular kinases and the subsequent phosphorylation of STAT3 on the serine residue at amino acid position 727 (STAT3-Ser). This modification is followed by an increase in the levels of Hes3, a transcription factor belonging to the Hes/Hey family of genes (see HES1 ). [ 4 ] Hes3 has been used as a biomarker to identify putative endogenous stem cells in tissues. [ 5 ] The pathway is an example of non-canonical signaling as it represents a new branch of a previously established signaling pathway ( notch ). Several efforts are currently aimed at relating this pathway to other signaling pathways and to manipulate it in a therapeutic context.
In canonical notch signaling, ligand proteins bind to the extracellular domain of the notch receptor and induce the cleavage and release of the intracellular domain into the cytoplasm. This subsequently interacts with other proteins, enters the nucleus, and regulates gene expression. [ 1 ]
In 2006, a non-canonical branch of the notch signaling pathway was discovered. [ 2 ] Using cultures of mouse neural stem cells, notch activation was shown to lead to the phosphorylation of several kinases ( PI3K , Akt , mTOR ) and subsequent phosphorylation of the serine residue of STAT3 in the absence of any detectable phosphorylation of the tyrosine residue of STAT3, a modification that is widely studied in the context of cancer biology. [ 6 ] Following this event, Hes3 mRNA was elevated within 30 minutes. Subsequently, the consequences of this pathway were studied.
Various inputs into this pathway have been identified. Activators include ligands of a number of receptors. Because certain signal transduction pathways oppose the STAT3-Ser/Hes3 signaling axis, blockers ( inhibitors ) of these signal transduction pathways promote the STAT3-Ser/Hes3 signaling axis and, therefore, also act as activators:
The effects of a particular signal transduction pathway can be very different among distinct cell types. For example, the same signal transduction pathway may promote the survival of one cell type but the maturation of another. This depends both on the nature of a cell but also on its particular state which may change over the course of its lifetime. Identifying cell types where a signal transduction pathway is operational is a first step to uncovering potentially new properties of this pathway.
The STAT3-Ser/Hes3 signaling axis has been shown to operate on various cell types. So far, research has mostly focused on stem cells and cancerous tissue and, more recently, in the function of the endocrine pancreas :
An individual signal transduction pathway can regulate several proteins (e.g. kinases) as well as the activation of many genes. The consequences to the properties of the cell can be, therefore, very prominent. Identifying these properties (through theoretical predictions and experimentation) sheds light on the function of the pathway and provides possible new therapeutic targets.
Activation of the notch/STAT3-Ser/Hes3 signaling axis has significant consequences to several cell types; effects have been documented both in vitro and in vivo:
As stated above, the STAT3-Ser/Hes3 signaling axis regulates the number of neural stem cells (as well as other cell types) in culture. This prompted experiments to determine if the same pathway can also regulate the number of naturally resident (endogenous) neural stem cells in the adult rodent brain. If so, this would generate a new experimental approach to study the effects of increasing the number of endogenous neural stem cells (eNSCs). For example, would this lead to the replacement of lost cells by newly generated cells from eNSCs? Or, could this lead to the rescue of damaged neurons in models of neurodegenerative disease, since eNSCs are known to produce factors that can protect injured neurons? [ 21 ]
Various treatments that input into the STAT3-Ser/Hes3 signaling axis (Delta4, Angiopoietin 2, insulin, or a combined treatment consisting of all three factors and an inhibitor of JAK) induce the increase in numbers of endogenous neural stem cells as well as behavioral recovery in models of neurodegenerative disease . Several pieces of evidence suggest that in the adult brain, pharmacological activation of the STAT3-Ser/Hes3 signaling axis protects compromised neurons through increased neurotrophic support provided by activated neural stem cells / neural precursor cells, which can be identified by their expression of Hes3:
The emerging understanding of the role of eNSCs in the adult mammalian brain suggested the relevance of these cells to disease. To address this issue, experiments were performed where the activation of eNSCs was induced in models of disease. This allowed the study of the consequences of activating eNSCs in the diseased brain. Several lines of evidence implicate the STAT3-Ser/Hes3 signaling axis in various diseases:
In tissues, many different cell types interact with one another. In the brain, for example, neurons , astrocytes , and oligodendrocytes (specialized cells of the neural tissue, each with specific functions) interact with one another as well as with cells that comprise blood vessels. All these different cell types may interact with all others by the production of ligands that may activate receptors on the cell surface of other cell types. Understanding the way these different cell types interact with one another will make it possible to predict ways of activating eNSCs. For example, because eNSCs are found in close proximity with blood vessels, it has been hypothesized that signals (e.g., ligands) from cells comprising the blood vessel act on receptors found on the cell surface of eNSCs.
Endogenous neural stem cells are often in close physical proximity to blood vessels. Signals from blood vessels regulate their interaction with stem cells and contribute to the cytoarchitecture of the tissue. The STAT3-Ser/Hes3 signaling axis operating in Hes3+ cells is a convergence point for several of these signals (e.g. Delta4, Angiopoietin 2). Hes3, in turn, by regulating the expression of Shh and potentially other factors, can also exert an effect on blood vessels and other cells comprising their microenvironment. | https://en.wikipedia.org/wiki/Hes3_signaling_axis |
Hess's law of constant heat summation , also known simply as Hess's law , is a relationship in physical chemistry and thermodynamics [ 1 ] named after Germain Hess , a Swiss -born Russian chemist and physician who published it in 1840. The law states that the total enthalpy change during the complete course of a chemical reaction is independent of the sequence of steps taken. [ 2 ] [ 3 ]
Hess's law is now understood as an expression of the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function ). According to the first law of thermodynamics , the enthalpy change in a system due to a reaction at constant pressure is equal to the heat absorbed (or the negative of the heat released), which can be determined by calorimetry for many reactions. The values are usually stated for reactions with the same initial and final temperatures and pressures (while conditions are allowed to vary during the course of the reactions). Hess's law can be used to determine the overall energy required for a chemical reaction that can be divided into synthetic steps that are individually easier to characterize. This affords the compilation of standard enthalpies of formation , which may be used to predict the enthalpy change in complex synthesis.
Hess's law states that the change of enthalpy in a chemical reaction is the same regardless of whether the reaction takes place in one step or several steps, provided the initial and final states of the reactants and products are the same. Enthalpy is an extensive property , meaning that its value is proportional to the system size. [ 4 ] Because of this, the enthalpy change is proportional to the number of moles participating in a given reaction.
In other words, if a chemical change takes place by several different routes, the overall enthalpy change is the same, regardless of the route by which the chemical change occurs (provided the initial and final condition are the same). If this were not true, then one could violate the first law of thermodynamics .
Hess's law allows the enthalpy change (Δ H ) for a reaction to be calculated even when it cannot be measured directly. This is accomplished by performing basic algebraic operations based on the chemical equations of reactions using previously determined values for the enthalpies of formation.
Combination of chemical equations leads to a net or overall equation. If the enthalpy changes are known for all the equations in the sequence, their sum will be the enthalpy change for the net equation. If the net enthalpy change is negative ( Δ H net < 0 {\displaystyle \Delta H_{\text{net}}<0} ), the reaction is exothermic and is more likely to be spontaneous ; positive Δ H values correspond to endothermic reactions. ( Entropy also plays an important role in determining spontaneity, as some reactions with a positive enthalpy change are nevertheless spontaneous due to an entropy increase in the reaction system.)
Hess's law states that enthalpy changes are additive. Thus the value of the standard enthalpy of reaction can be calculated from standard enthalpies of formation of products and reactants as follows:
Δ H reaction ⊖ = ∑ a i Δ f H products ⊖ − ∑ b i Δ f H reactants ⊖ {\displaystyle \Delta H_{\text{reaction}}^{\ominus }=\sum a_{i}\,\Delta _{\text{f}}H_{\text{products}}^{\ominus }-\sum b_{i}\,\Delta _{\text{f}}H_{\text{reactants}}^{\ominus }}
Here, the first sum is over all products and the second over all reactants, a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} are the stoichiometric coefficients of products and reactants respectively, Δ f H p r o d u c t s ⊖ {\displaystyle \Delta _{\text{f}}H_{products}^{\ominus }} and Δ f H r e a c t a n t s ⊖ {\displaystyle \Delta _{\text{f}}H_{reactants}^{\ominus }} are the standard enthalpies of formation of products and reactants respectively, and the o superscript indicates standard state values. This may be considered as the sum of two (real or fictitious) reactions:
Reactants → Elements and Elements → Products
In the standard example of the combustion of carbon, reaction (a), C(s) + O 2 → CO 2 , is the sum of reactions (b), C(s) + ½ O 2 → CO, and (c), CO + ½ O 2 → CO 2 ; the total Δ H for (b) and (c) is −393.5 kJ/mol, which is equal to Δ H in (a).
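A minimal sketch of this additivity in code, using the carbon-combustion example above; the enthalpy values are the usual textbook ones in kJ/mol.

```python
# Hess's law: the enthalpy change of a net reaction equals the sum of the
# enthalpy changes of the steps that add up to it.
steps = {
    "C(s) + 1/2 O2 -> CO": -110.5,   # kJ/mol
    "CO + 1/2 O2  -> CO2": -283.0,   # kJ/mol
}
delta_H_net = sum(steps.values())
print(f"C(s) + O2 -> CO2 : dH = {delta_H_net:.1f} kJ/mol")  # -393.5 kJ/mol

# Equivalently, from standard enthalpies of formation (kJ/mol):
dHf = {"C(s)": 0.0, "O2": 0.0, "CO2": -393.5}
delta_H = dHf["CO2"] - (dHf["C(s)"] + dHf["O2"])
print(f"from formation enthalpies: {delta_H:.1f} kJ/mol")
```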
The concepts of Hess's law can be expanded to include changes in entropy and in Gibbs free energy , since these are also state functions . The Bordwell thermodynamic cycle is an example of such an extension that takes advantage of easily measured equilibria and redox potentials to determine experimentally inaccessible Gibbs free energy values. Combining Δ G o values from Bordwell thermodynamic cycles and Δ H o values found with Hess's law can be helpful in determining entropy values that have not been measured directly and therefore need to be calculated through alternative paths.
For the free energy:
Δ G reaction ⊖ = ∑ Δ f G products ⊖ − ∑ Δ f G reactants ⊖ {\displaystyle \Delta G_{\text{reaction}}^{\ominus }=\sum \Delta _{\text{f}}G_{\text{products}}^{\ominus }-\sum \Delta _{\text{f}}G_{\text{reactants}}^{\ominus }}
For entropy , the situation is a little different. Because entropy can be measured as an absolute value, not relative to those of the elements in their reference states (as with Δ H o and Δ G o ), there is no need to use the entropy of formation; one simply uses the absolute entropies for products and reactants:
Δ S reaction ⊖ = ∑ S products ⊖ − ∑ S reactants ⊖ {\displaystyle \Delta S_{\text{reaction}}^{\ominus }=\sum S_{\text{products}}^{\ominus }-\sum S_{\text{reactants}}^{\ominus }}
Hess's law is useful in the determination of enthalpies of the following: [ 2 ] | https://en.wikipedia.org/wiki/Hess's_law |
In geometry , Hesse's principle of transfer ( German : Übertragungsprinzip ) states that if the points of the projective line P 1 are depicted by a rational normal curve in P n , then the group of the projective transformations of P n that preserve the curve is isomorphic to the group of the projective transformations of P 1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer ). [ 1 ] [ 2 ] It was originally introduced by Otto Hesse in 1866, in a more restricted form. It influenced Felix Klein in the development of the Erlangen program . [ 3 ] [ 4 ] [ 5 ] Since its original conception, it was generalized by many mathematicians, including Klein , Fano , and Cartan . [ 6 ]
| https://en.wikipedia.org/wiki/Hesse's_principle_of_transfer
In geometry, Hesse's theorem , named for Otto Hesse , states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. A quadrilateral with this property is called a Hesse quadrilateral .
| https://en.wikipedia.org/wiki/Hesse's_theorem
Hesseltinella is a genus of fungi belonging to the family Cunninghamellaceae . [ 1 ]
The genus name of Hesseltinella is in honour of Clifford William Hesseltine (1917–1999), who was an American botanist (mycologist) and microbiologist from the University of Wisconsin . [ 2 ]
The genus was circumscribed by Harbansh Prasad Upadhyay in Persoonia vol.6 (issue 1) on pages 111, 116-117 in 1970.
The genus has cosmopolitan distribution . [ 1 ]
It has one known species: Hesseltinella vesiculosa H.P.Upadhyay [ 1 ] | https://en.wikipedia.org/wiki/Hesseltinella
In applied mathematics , Hessian automatic differentiation refers to techniques based on automatic differentiation (AD)
that calculate the second derivative of an n {\displaystyle n} -dimensional function, known as the Hessian matrix .
When examining a function in a neighborhood of a point, one can discard many complicated global aspects of the function and accurately approximate it with simpler functions. The quadratic approximation is the best-fitting quadratic in the neighborhood of a point, and is frequently used in engineering and science. To calculate the quadratic approximation, one must first calculate its gradient and Hessian matrix.
Let f : R n → R {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } , for each x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} the Hessian matrix H ( x ) ∈ R n × n {\displaystyle H(x)\in \mathbb {R} ^{n\times n}} is the second order derivative and is a symmetric matrix .
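The role of the gradient and Hessian in the quadratic approximation can be illustrated with a small hand-worked example in Python; the test function and evaluation point are chosen arbitrarily for this sketch.

```python
import numpy as np

def f(x):
    # f(x, y) = x^2 * y + y^3
    return x[0] ** 2 * x[1] + x[1] ** 3

def grad(x):
    # analytic gradient of f
    return np.array([2 * x[0] * x[1], x[0] ** 2 + 3 * x[1] ** 2])

def hess(x):
    # analytic Hessian of f (symmetric 2x2 matrix)
    return np.array([[2 * x[1], 2 * x[0]],
                     [2 * x[0], 6 * x[1]]])

x0 = np.array([1.0, 2.0])
d = np.array([0.1, -0.05])          # small step away from x0

# second-order Taylor (quadratic) approximation around x0
quadratic = f(x0) + grad(x0) @ d + 0.5 * d @ hess(x0) @ d
print(f"exact f(x0+d) = {f(x0 + d):.6f}, quadratic approx = {quadratic:.6f}")
```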
For a given u ∈ R n {\displaystyle u\in \mathbb {R} ^{n}} , this method efficiently calculates the Hessian-vector product H ( x ) u {\displaystyle H(x)u} . It can thus be used to calculate the entire Hessian by calculating H ( x ) e i {\displaystyle H(x)e_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} . [ 1 ]
The method works by first using forward AD to perform f ( x ) → u T ∇ f ( x ) {\displaystyle f(x)\rightarrow u^{T}\nabla f(x)} ; the method then calculates the gradient of u T ∇ f ( x ) {\displaystyle u^{T}\nabla f(x)} using reverse AD to yield ∇ ( u ⋅ ∇ f ( x ) ) = u T H ( x ) = ( H ( x ) u ) T {\displaystyle \nabla \left(u\cdot \nabla f(x)\right)=u^{T}H(x)=(H(x)u)^{T}} . Each of these two steps comes at a time cost proportional to a single evaluation of the function, so the entire Hessian can be evaluated at a cost proportional to n evaluations of the function.
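A minimal sketch of this forward-over-reverse pattern, using the jax library as one concrete AD implementation (an illustrative choice, not part of the cited method): jax.grad builds the reverse-mode gradient, and jax.jvp pushes a forward-mode tangent u through it, yielding H(x)u.

```python
import jax
import jax.numpy as jnp

def f(x):
    # arbitrary smooth test function from R^n to R
    return jnp.sum(jnp.sin(x) * x ** 2)

x = jnp.array([0.5, -1.0, 2.0])
u = jnp.array([1.0, 0.0, 0.0])

# reverse mode gives the gradient; forward mode differentiates it along u
grad_x, hvp = jax.jvp(jax.grad(f), (x,), (u,))
print("H(x) @ u =", hvp)

# the full Hessian follows from n such products with the basis vectors e_i
H = jnp.stack([jax.jvp(jax.grad(f), (x,), (jnp.eye(3)[i],))[1] for i in range(3)])
print(H)
```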
An algorithm that calculates the entire Hessian with one forward and one reverse sweep of the computational graph is Edge_Pushing. Edge_Pushing is the result of applying the reverse gradient to the computational graph of the gradient. Naturally, this graph has n output nodes, thus in a sense one has to apply the reverse gradient method to each outgoing node. Edge_Pushing does this by taking into account overlapping calculations. [ 2 ]
The algorithm's input is the computational graph of the function. After a preceding forward sweep where all intermediate values in the computational graph are calculated, the algorithm initiates a reverse sweep of the graph. Upon encountering a node that has a corresponding nonlinear elemental function, a new nonlinear edge is created between the node's predecessors indicating there is nonlinear interaction between them. See the example figure on the right. Appended to this nonlinear edge is an edge weight that is the second-order partial derivative of the nonlinear node in relation to its predecessors. This nonlinear edge is subsequently pushed down to further predecessors in such a way that when it reaches the independent nodes, its edge weight is the second-order partial derivative of the two independent nodes it connects. [ 2 ]
The graph colouring techniques explore sparsity patterns of the Hessian matrix and cheap Hessian vector products to obtain the entire matrix. Thus these techniques are suited for large, sparse matrices. The general strategy of any such colouring technique is broadly as follows (the details vary between methods):
1. Obtain the sparsity pattern of the Hessian.
2. Colour a graph built from the sparsity pattern, thereby grouping together columns that can be evaluated simultaneously.
3. For each colour group, compute a Hessian-vector product with the sum of that group's coordinate vectors, giving a compressed version of the corresponding columns.
4. Recover the individual Hessian entries from the compressed products.
Steps one and two need only be carried out once, and tend to be costly. When one wants to calculate the Hessian at numerous points (such as in an optimization routine), steps 3 and 4 are repeated.
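A compact sketch of the simplest such scheme, grouping structurally orthogonal columns (columns that share no nonzero row) and not yet exploiting symmetry; the helper names are our own and the greedy grouping is only one of many possible colouring strategies.

```python
import numpy as np

def orthogonal_column_groups(sparsity):
    """Greedily group columns so that no two columns in a group share a nonzero row."""
    n = sparsity.shape[1]
    groups, rows_used = [], []
    for col in range(n):
        rows = set(np.nonzero(sparsity[:, col])[0])
        for g, used in zip(groups, rows_used):
            if not rows & used:          # no collision with this group
                g.append(col)
                used |= rows
                break
        else:
            groups.append([col])
            rows_used.append(rows)
    return groups

def sparse_hessian(hvp, sparsity):
    """Recover all structurally nonzero Hessian entries, using one
    Hessian-vector product per colour group instead of one per column."""
    n = sparsity.shape[1]
    H = np.zeros((n, n))
    for group in orthogonal_column_groups(sparsity):
        seed = np.zeros(n)
        seed[group] = 1.0
        compressed = hvp(seed)           # sum of the group's Hessian columns
        for col in group:                # each nonzero row is hit by only one column
            rows = np.nonzero(sparsity[:, col])[0]
            H[rows, col] = compressed[rows]
    return H

# Example: a tridiagonal Hessian needs only 3 products regardless of n.
n = 6
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
print(sparse_hessian(lambda v: A @ v, A != 0))
```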
As an example, the figure on the left shows the sparsity pattern of the Hessian matrix where the columns have been appropriately coloured in such a way as to allow columns of the same colour to be merged without incurring a collision between elements.
There are a number of colouring techniques, each with a specific recovery technique. For a comprehensive survey, see [ 3 ] . There have been successful numerical results of such methods. [ 4 ] | https://en.wikipedia.org/wiki/Hessian_automatic_differentiation
A Hessian crucible is a type of ceramic crucible that was manufactured in the Hesse region of Germany from the late Middle Ages through the Renaissance period. They were renowned for their ability to withstand very high temperatures, rapid changes in temperature, and strong reagents . These crucibles were widely used for alchemy and early metallurgy . Millions of the vessels were exported throughout Europe, Scandinavia, and the colonies in the Americas . The crucibles were made by firing kaolinitic clay at temperatures greater than 1100°C, forming mullite . Mullite is an aluminum silicate first described in the 20th century and is responsible for the excellent properties of the Hessian crucible. [ 1 ] [ 2 ] The main production centre of Hessian crucibles was the village of Großalmerode .
| https://en.wikipedia.org/wiki/Hessian_crucible
In chemistry , a heteroatom (from Ancient Greek heteros ' different ' and atomos ' uncut ' ) is, strictly, any atom that is not carbon or hydrogen . [ 1 ]
In practice, the term is mainly used more specifically to indicate that non-carbon atoms have replaced carbon in the backbone of the molecular structure . Typical heteroatoms are nitrogen (N), oxygen (O), sulfur (S), phosphorus (P), chlorine (Cl), bromine (Br), and iodine (I), [ 2 ] [ 3 ] as well as the metals lithium (Li) and magnesium (Mg).
It can also be used with highly specific meanings in specialised contexts. In the description of protein structure, in particular in the Protein Data Bank file format, a heteroatom record (HETATM) describes an atom as belonging to a small molecule cofactor rather than being part of a biopolymer chain. [ 4 ]
In the context of zeolites , the term heteroatom refers to partial isomorphous substitution of the typical framework atoms ( silicon , aluminium , and phosphorus ) by other elements such as beryllium , vanadium , and chromium . [ 5 ] The goal is usually to adjust properties of the material (e.g., Lewis acidity ) to optimize the material for a certain application (e.g., catalysis ). | https://en.wikipedia.org/wiki/Heteroatom |
Heteroatom-promoted lateral lithiation is the site-selective replacement of a benzylic hydrogen atom for lithium for the purpose of further functionalization. Heteroatom-containing substituents may direct metalation to the benzylic site closest to the heteroatom or increase the acidity of the ring carbons via an inductive effect. [ 1 ]
Toluene derivatives with heteroatom-containing substituents in the ortho position undergo site-selective benzylic lithiation in the presence of organolithium compounds (either alkyllithiums or lithium dialkylamides). Coordination of the Lewis acidic lithium atom to the Lewis basic heteroatom, as well as inductive effects derived from the electronegativity of the heteroatom, encourage selective deprotonation at the benzylic position. [ 2 ] Competitive ring metalation (directed ortho -metalation) is an important side reaction, but a judicious choice of base often allows for selective benzylic metalation. Useful heteroatom-containing directing groups include dialkylamines, [ 3 ] amides (secondary or tertiary), ketone enolates, [ 4 ] carbamates, and sulfonates. Lateral lithiation of alkyl-substituted heterocycles incorporating heteroatom-containing substituents is also possible, although ring lithiation α to the ring heteroatom may compete with lateral lithiation. [ 2 ] The products of lateral lithiation react with a variety of electrophiles, including reactive alkyl halides (allylic, benzylic, and primary), carbonyl compounds, silyl and stannyl chlorides, disulfides and diselenides, and others. A general, highly selective method for benzylic metalation using a mixed lithium and potassium metal amide (LiNK chemistry) has been developed which permits metalation regardless of the relative position ( ortho , meta or para ) of the methyl group to the heteroatom containing substituent [ 5 ]
(1)
Two limiting mechanisms, one operating under kinetic and the other thermodynamic control, have been identified for lateral lithiation reactions. The mechanisms of most lateral lithiations fall somewhere between these two limiting mechanisms, and the precise mechanism of a particular lithiation depends on two factors: the Lewis acidity of the organolithium base and the coordinating ability (Lewis basicity) of the heteroatom-containing substituent.
When both the Lewis acidity of the organolithium compound and the Lewis basicity of the substituent are high, as in lithiations of ortho -(dialkylamino)methyl toluenes with n -butyllithium in a non-coordinating solvent, coordination of the base to the heteroatom substituent takes place. Lithiation then occurs at the most kinetically accessible ortho benzylic position; ortho lithiation is slower in this case. [ 2 ]
(2)
As either the Lewis acidity of the base or the coordinating ability of the substituent decrease, a mechanism involving purely inductive effects becomes more important. For instance, the lithiation of 1 with lithium di(isopropyl)amide (LDA) affords only the product of benzylic metalation 2 ; none of the ortho -lithiated product 3 is observed. This result is explained by a mechanism in which the amide substituent affects the acidity of the para benzylic position solely through inductive effects and coordination of the base is not operative. Deprotonation occurs to afford the most thermodynamically stable product. [ 6 ]
(3)
In most cases, both mechanisms will lead to the same product, as the sites of kinetic and thermodynamic deprotonation will coincide.
A variety of heteroatom-containing substituents promote lateral lithiation of an ortho methyl group. Generally, better results are obtained when the heteroatom is in the β position rather than the α position, as the latter tends to promote ortho lithiation. Lithiation of primary benzylic positions is slower than lithiation of methyl groups due to inductive electron donation from the additional alkyl group (rather than steric effects). [ 7 ] Electrophiles that react with the benzylic anions formed by these methods include aldehydes and ketones, activated (primary, allylic, or benzylic) halides, [ 8 ] molecular oxygen, [ 9 ] and silyl chlorides. [ 10 ] This section describes the scope of directing groups that may be used to effect site-selective lithiation in substituted benzenes and heterocycles.
Aldehyde substituents suffer nucleophilic addition in the presence of organolithium compounds; however, adducts of aldehydes with lithium diamines can serve as effective directing groups for lateral lithiation. Subsequent treatment with an electrophilic primary alkyl halide and elimination of the diamine provides functionalized aryl aldehydes. [ 11 ]
(4)
Tertiary amides are highly effective directing groups. After treatment of the resulting benzylic anion with an aldehyde, cyclization leads to lactones. [ 12 ] Carboxamides, in which the amide is attached to the aromatic ring through nitrogen rather than carbon, are also effective directing groups. [ 13 ]
(5)
Related O -aryl carbamates are good directing groups; upon warming, the resulting organolithiums undergo rearrangement to benzylic amides (the Snieckus-Fries rearrangement) via migration of the carbonyl carbon from oxygen to carbon. [ 14 ]
(6)
Secondary N -aryl carbamates (along with secondary amides, ketones, and other directing groups containing acidic hydrogens) must be treated with two equivalents of organolithium reagent for lateral lithiation to occur. In the case below, sec -butyllithium is used to avoid competitive addition to the Boc group. [ 15 ]
(7)
Sulfonamides require two equivalents of an organolithium reagent for lateral lithiation, but represent a useful class of directing groups. Treatment with ketones leads to tertiary alcohols in high yield. [ 16 ]
(8)
Convenient generation of a directing group on the nitrogen of indoles is possible through treatment with an organolithium reagent and carbon dioxide. [ 17 ] A similar method can be applied for lateral lithiations of ortho -tolyl anilines. [ 18 ]
(9)
Oxazoles containing two methyl groups exhibit interesting selectivity patterns. In the absence of a directing substituent, the methyl group closer to the more electronegative oxygen atom is selectively metalated. However, in the presence of a directing substituent, the director fully controls the site of lithiation. [ 19 ]
(10)
Ortho lithiation followed by methylation with methyl iodide is a convenient method for the synthesis of starting materials for lateral lithiations. Elaboration of the benzylic carbon through lateral lithiation and treatment with an electrophile provides a powerful synthetic alternative to direct electrophilic aromatic substitution (EAS). Although yields over the entire sequence are moderate, site selectivity is generally higher than analogous EAS reactions. [ 20 ]
(11)
Ortho lithiation can be used to generate many of the same structures as lateral lithiation; however, reactivity differences between aryl- and benzyllithium species may suggest the use of one method over the other. [ 15 ] A useful alternative method for stereoselective functionalization of the benzylic position involves the use of chromium arene complexes. Substitution at the benzylic position is much better tolerated in methods that employ benzylic lithiation of chromium arene complexes than lateral lithiations; however, the chromium byproducts of these reactions pose waste disposal difficulties. [ 21 ] The use of mixed zinc/copper organometallic reagents generated from benzyl bromides represents a second alternative to lateral lithiation. The functional group compatibility of this method is greater than lateral lithiation, but more steps are required to generate the reactive organometallic species from an unfunctionalized benzylic position. [ 22 ]
Organolithium reagents are sensitive to moisture and thus should be handled under inert atmosphere in anhydrous conditions. Tetrahydrofuran is the most common solvent employed for lateral lithiation reactions. Measurement of the concentration of commercial or prepared alkyllithium solutions may be accomplished using well-established titration methods. [ 23 ]
A useful indicator for the progress of lateral lithiations is the color of the reaction mixture. Benzyllithium compounds range in color from red to deep purple, and in many cases the lack of a color change upon addition of an organolithium reagent to the substrate may indicate the presence of an undesired proton source in solution.
Source: [ 24 ]
(12)
n -Butyllithium (14.0 mL of a 2.5 M solution in hexane, 35 mmol) was added dropwise to a solution of 2,6-dimethylanisole (4.95 mL, 35 mmol) in 60 mL of tetrahydrofuran at 0°, and the resulting solution was stirred at 0° for 1 hour and then at ambient temperature for 4 hours. The reaction mixture was cooled to 0°, treated with cyclohexanecarboxaldehyde (4.2 mL, 35 mmol), allowed to warm to ambient temperature again, and poured into saturated aqueous ammonium chloride solution. The mixture was extracted with ether and the ether extract was washed with water and brine and concentrated in vacuo. The residue was purified by silica gel chromatography (hexane-ether, 5:1 v/v) to give 4.2 g (48%) of the product as a colorless oil; 1 H NMR (CDCl 3 ) δ 1.05–1.50 (m, 6H), 1.64–1.82 (m, 4H), 1.92 (m, 1H), 2.28 (d, 1H, J = 3 Hz), 2.31 (s, 3H), 2.68 (dd, 1H, J = 10, 13 Hz), 2.85 (dd, 1H, J = 3, 13 Hz), 3.57 (m, 1H), 3.75 (s, 3H), 6.95–7.10 (m, 3H). | https://en.wikipedia.org/wiki/Heteroatom-promoted_lateral_lithiation |
A heteroazeotrope is an azeotrope where the vapour phase coexists with two liquid phases.
Sketch of a T-x/y equilibrium curve of a typical heteroazeotropic mixture
Heterogeneous distillation means that during the distillation the liquid in the column splits into two immiscible phases.
In this case two liquid phases can be present on the plates, and the condensed top vapour splits into two liquid phases, which can be separated in a decanter.
The simplest case of continuous heteroazeotropic distillation is the separation of a binary heterogeneous azeotropic mixture. In this case the system contains two columns and a decanter. The fresh feed (A-B) is added into the first column. (The feed may also be added into the decanter directly or into the second column depending on the composition of the mixture). From the decanter the A-rich phase is withdrawn as reflux into the first column while the B-rich phase is withdrawn as reflux into the second column. This means the first column produces "A" and the second column produces "B" as bottoms products. In industry the butanol-water mixture is separated with this technique.
In the previous case the binary system already forms a heterogeneous azeotrope. The other application of heteroazeotropic distillation is the separation of a binary system (A-B) forming a homogeneous azeotrope. In this case an entrainer or solvent is added to the mixture to form a heteroazeotrope with one or both of the components and thereby help the separation of the original A-B mixture.
Batch heteroazeotropic distillation is an efficient method for the separation of azeotropic and low relative volatility (low α) mixtures. A third component (entrainer, E) is added to the binary A-B mixture, which makes the separation of A and B possible. The entrainer forms a heteroazeotrope with at least one (and preferably with only one (selective entrainer)) of the original components.
The main parts of the conventional batch distillation columns are the following:
- pot (including the reboiler)
- column
- condenser to condense the top vapour
- product receivers
- (entrainer feed)
In case of the heteroazeotropic distillation the equipment is completed with a decanter, where the two liquid phases are split.
Three different cases are possible for the addition of the entrainer:
1. Batch addition of the entrainer: the total quantity of the entrainer is added to the charge before the start of the procedure.
2. Continuous entrainer feeding: the total quantity of the entrainer is introduced continuously into the column.
3. Mixed addition of the entrainer: a combination of batch addition and continuous feeding. One part of the entrainer is added to the charge before the start of the distillation and the other part is fed continuously during the distillation.
In recent years batch heteroazeotropic distillation has come into prominence, and several studies have been published. Heteroazeotropic batch distillation has been investigated by feasibility studies, rigorous simulation calculations and laboratory experiments. Feasibility analyses are conducted in Modla et al. [ 1 ] [ 2 ] and Rodriguez-Donis et al. [ 3 ] for the separation of low-relative-volatility and azeotropic mixtures by heterogeneous batch distillation in a batch rectifier. Rodriguez-Donis et al. [ 4 ] were the first to provide entrainer selection rules. The feasibility methods were extended and modified by Rodriguez-Donis et al., [ 5 ] Rodriguez-Donis et al. (2005), Skouras et al., [ 6 ] [ 7 ] and Lang and Modla. [ 8 ] Varga [ 9 ] applied these feasibility studies in her thesis. Experimental results were published by Rodriguez-Donis et al., [ 10 ] Xu and Wand, [ 11 ] Van Kaam [ 12 ] and others. | https://en.wikipedia.org/wiki/Heteroazeotrope
Heterobimetallic catalysis is an approach to catalysis that employs two different metals to promote a chemical reaction . Included in this definition are cases ( Scheme 1 ) where: 1 ) each metal activates a different substrate ( synergistic catalysis , used interchangeably with the terms "cooperative" and "dual" catalysis. [ 1 ] ), 2 ) both metals interact with the same substrate, and 3 ) only one metal directly interacts with the substrate(s), while the second metal interacts with the first. [ 2 ]
Complexes of palladium catalyze cross-coupling of electrophiles with organometallic nucleophiles , including those derived from lithium, tin, zinc, and boron. [ 3 ] One example is Sonogashira coupling , where catalytic amount of copper salt (e.g. CuI) reacts with a terminal alkyne (the pronucleophile) under basic conditions to generate a copper acetylide , which transmetalates onto an arylpalladium II halide, regenerating the copper halide. Reductive elimination from the arylpalladium acetylide yields the cross-coupled product. [ 2 ]
Other organic pronucleophiles are cross-coupled with arylpalladium halides in the following examples ( Scheme 2 ):
1. Gold-catalyzed cyclization of allenoates followed by cross-coupling with aryl iodides yields 4-arylbutenolides [ 4 ]
2. Borylcupration of styrenes followed by palladium-catalyzed cross-coupling with aryl halides generates α-aryl-β-boromethyl functionalized arenes. [ 5 ] [ 6 ] This reaction has been rendered diastereoselective in the case of cyclic styrenes, [ 7 ] and an enantioselective variant has also been developed. [ 8 ] Enantioselective hydroarylation of styrenes is accomplished similarly via a chiral copper hydride [ 9 ]
3. Asymmetric conjugate reduction-allylation of α,β-unsaturated ketones is achieved by Cu-H mediated reduction and subsequent allylation via a chiral PHOX-ligated palladium catalyst [ 10 ]
Also of note is the enantioselective allylation of activated nitriles ( Scheme 3 ). [ 11 ] A chiral bis phosphine -ligated rhodium catalyst activates the alpha-keto-nitrile component as its corresponding enolate , which is intercepted by a π-allylpalladium complex to yield the α-allylated nitrile in high enantiomeric excess . In the absence of the rhodium catalyst no enantioselectivity is observed, whereas the reaction does not proceed in the absence of palladium.
Catalyst systems in which both metal centers are contained in the same complex are also known (e.g. Shibasaki catalysts ); further examples are provided below.
Ion-paired combinations of early and late transition metal complexes can simultaneously interact with a substrate as both Lewis acid and Lewis base . [ 2 ] For example, carbonylative ring expansion of epoxides ( Scheme 4 ) [ 12 ] [ 13 ] [ 14 ] is accomplished by Lewis acid activation by cationic complexes of Cr III , Ti III or Al III with simultaneous ring opening by the [Co(CO) 4 ] − counterion . Carbonylation of the resultant alkylcobalt followed by lactonization releases the product.
A heterobimetallic bond-breaking process is also employed in the IPrCuFp-catalyzed C-H borylation system developed by Mankad ( Scheme 5 ). [ 15 ] Bimetallic cleavage of the B-H bond in pinacolborane generates a copper hydride (IPrCu-H) and an iron boryl [(pin)B-Fp], the latter of which borylates unactivated arenes upon UV irradiation . Bimetallic reductive elimination of H 2 from the combination of H-Fp and IPrCu-H restarts the catalytic cycle. The incorporation of copper into the catalyst is essential; C-H borylation using (pin)B-Fp alone is stoichiometric in iron due to dimerization of the HFp byproduct.
Heterobimetallic catalysts containing persistent M 1 -M 2 bonds exhibit altered reactivity due to interaction of the two different metal centers. For example, allylic amination catalyzed by the binuclear complex [Cl 2 Ti(N t BuPPh 2 ) 2 -/Pd(η 3 -CH 2 C(CH 3 )CH 2 )] + is exceptionally rapid. [ 16 ] DFT studies suggest that a Pd→Ti dative interaction accelerates the typically slow reductive elimination step by withdrawing electron density from Pd in the transition state [ 17 ] ( Scheme 6 ).
Silica-supported heterobimetallic tantalum–iridium catalysts were shown to exhibit drastically increased catalytic performance in catalytic H/D exchange reactions with respect to (i) monometallic analogues as well as (ii) homogeneous systems. [ 18 ] The key transition state in the C-H activation pathway, computed by DFT, involves (i) donation from the C-H σ orbital to an empty d orbital on the electrophilic early metal (Ta) together with (ii) backdonation from a filled d orbital arising from the late metal (Ir) to the C-H σ* orbital for nucleophilic assistance ( Scheme 7 ). The calculations have shown that steric effects imparted by the ancillary ligands can result in enormous differences in C-H activation energy barriers (ca. 20 kcal/mol) in this heterobimetallic cooperative mechanism, indicating that the accessibility of the metals has a drastic impact on the catalytic performance. [ 19 ]
The combination of photoredox catalysis with traditional transition metal catalysis enables the use of visible light to drive challenging steps in a catalytic cycle. [ 20 ] For example, nickel-catalyzed aryl amination suffers from a difficult C-N reductive elimination step. [ 20 ] Hence instead of nickel, expensive palladium-based precatalysts are often used in combination with sterically encumbered phosphine ligands to facilitate reductive elimination. [ 20 ] A more recent approach employs an iridium-based photoredox catalyst to effect single-electron oxidation of the intermediate Ni II -amido complex. The resulting Ni III -amido rapidly undergoes reductive elimination, [ 20 ] allowing the Ni-catalyzed aryl amination to proceed at room temperature without the use of phosphine ligands.
Enzymes containing two or more different metal centers are found in several important biological systems; for example, the Mo-Fe protein of nitrogenase [ 21 ] catalyzes the conversion of N 2 to NH 3 in nitrogen fixation . Of more relevance to human biology, Cu-Zn superoxide dismutase protects cells from oxidative stress by converting superoxide , O 2 − , to O 2 and hydrogen peroxide [ 22 ] | https://en.wikipedia.org/wiki/Heterobimetallic_catalysis |
Heteroboranes are classes of boranes in which at least one boron atom is replaced by another element . Like many of the related boranes, these clusters are polyhedra and are similarly classified as closo- , nido- , arachno- , and hypho- , according to the so-called electron count . Closo- represents a complete polyhedron, while nido- , arachno- and hypho- stand for polyhedra that are missing one, two and three vertices, respectively.
Besides carbon ( carboranes or carbaboranes), other elements can also be included in the heteroborane molecules as well, such as Si (silaboranes), N ( azaboranes , including borazine ), P (phosphaboranes), As (arsaboranes), Sb (stibaboranes), O (oxaboranes [ 1 ] ), S (thiaboranes [ 2 ] [ 3 ] ), Se (selenaboranes) and Te (telluraboranes), either alone or in combination. [ 4 ] [ 5 ]
Structurally, some heteroboranes can be derived from the icosahedral ( I h ) [B 12 H 12 ] 2− anion via formal replacement of its B H fragments with isoelectronic C H + , P + or S 2+ fragments, [ citation needed ] e.g., closo -1- [CB 11 H 12 ] − and closo -1,2- C 2 B 10 H 12 (two of the carboranes), closo -1,2- P 2 B 10 H 10 [ 6 ] (one of the phosphaboranes) or closo -1- SB 11 H 11 [ 2 ] (one of the thiaboranes).
Heteroboranes are used in various fields, such as drug discovery , imaging [ clarification needed ] , and nanotechnology . [ citation needed ] | https://en.wikipedia.org/wiki/Heteroborane |
Heterochromatin is a tightly packed form of DNA or condensed DNA , which comes in multiple varieties. These varieties lie on a continuum between the two extremes of constitutive heterochromatin and facultative heterochromatin . Both play a role in the expression of genes . Because it is tightly packed, it was thought to be inaccessible to polymerases and therefore not transcribed; however, according to Volpe et al. (2002), [ 1 ] and many other papers since, [ 2 ] much of this DNA is in fact transcribed, but it is continuously turned over via RNA-induced transcriptional silencing (RITS). Recent studies with electron microscopy and OsO 4 staining reveal that the dense packing is not due to the chromatin. [ 3 ]
Constitutive heterochromatin can affect the genes near itself (e.g. position-effect variegation ). It is usually repetitive and forms structural functions such as centromeres or telomeres , in addition to acting as an attractor for other gene-expression or repression signals.
Facultative heterochromatin is the result of genes that are silenced through a mechanism such as histone deacetylation or Piwi-interacting RNA (piRNA) through RNAi . It is not repetitive and shares the compact structure of constitutive heterochromatin. However, under specific developmental or environmental signaling cues, it can lose its condensed structure and become transcriptionally active. [ 4 ]
Heterochromatin has been associated with the di- and tri -methylation of H3K9 in certain portions of the human genome. [ 5 ] H3K9me3 -related methyltransferases appear to have a pivotal role in modifying heterochromatin during lineage commitment at the onset of organogenesis and in maintaining lineage fidelity. [ 6 ]
Chromatin is found in two varieties: euchromatin and heterochromatin. [ 7 ] Originally, the two forms were distinguished cytologically by how intensely they stain – euchromatin stains less intensely, while heterochromatin stains intensely, indicating tighter packing. Heterochromatin was given its name for this reason by botanist Emil Heitz , who discovered that heterochromatin remained darkly stained throughout the entire cell cycle, unlike euchromatin, whose stain disappeared during interphase. [ 8 ] Heterochromatin is usually localized to the periphery of the nucleus .
Despite this early dichotomy, recent evidence in both animals [ 9 ] and plants [ 10 ] has suggested that there are more than two distinct heterochromatin states, and it may in fact exist in four or five 'states', each marked by different combinations of epigenetic marks.
Heterochromatin mainly consists of genetically inactive satellite sequences , [ 11 ] and many genes are repressed to various extents, although some cannot be expressed in euchromatin at all. [ 12 ] Both centromeres and telomeres are heterochromatic, as is the Barr body of the second, inactivated X-chromosome in a female.
Heterochromatin has been associated with several functions, from gene regulation to the protection of chromosome integrity; [ 13 ] some of these roles can be attributed to the dense packing of DNA, which makes it less accessible to protein factors that usually bind DNA or its associated factors. For example, naked double-stranded DNA ends would usually be interpreted by the cell as damaged or viral DNA, triggering cell cycle arrest, DNA repair or destruction of the fragment, such as by endonucleases in bacteria.
Some regions of chromatin are very densely packed with fibers that display a condition comparable to that of the chromosome at mitosis . Heterochromatin is generally clonally inherited; when a cell divides, the two daughter cells typically contain heterochromatin within the same regions of DNA, resulting in epigenetic inheritance . Variations cause heterochromatin to encroach on adjacent genes or recede from genes at the extremes of domains. Transcribable material may be repressed by being positioned (in cis ) at these boundary domains. This gives rise to expression levels that vary from cell to cell, [ 14 ] which may be demonstrated by position-effect variegation . [ 15 ] Insulator sequences may act as a barrier in rare cases where constitutive heterochromatin and highly active genes are juxtaposed (e.g. the 5'HS4 insulator upstream of the chicken β-globin locus, [ 16 ] and loci in two Saccharomyces spp. [ 17 ] [ 18 ] ).
All cells of a given species package the same regions of DNA in constitutive heterochromatin , and thus in all cells, any genes contained within the constitutive heterochromatin will be poorly expressed . For example, all human chromosomes 1 , 9 , 16 , and the Y-chromosome contain large regions of constitutive heterochromatin. In most organisms, constitutive heterochromatin occurs around the chromosome centromere and near telomeres.
The regions of DNA packaged in facultative heterochromatin will not be consistent between the cell types within a species, and thus a sequence in one cell that is packaged in facultative heterochromatin (and the genes within are poorly expressed) may be packaged in euchromatin in another cell (and the genes within are no longer silenced). However, the formation of facultative heterochromatin is regulated, and is often associated with morphogenesis or differentiation . An example of facultative heterochromatin is X chromosome inactivation in female mammals: one X chromosome is packaged as facultative heterochromatin and silenced, while the other X chromosome is packaged as euchromatin and expressed.
Among the molecular components that appear to regulate the spreading of heterochromatin are the Polycomb-group proteins and non-coding genes such as Xist . The mechanism for such spreading is still a matter of controversy. [ 19 ] The polycomb repressive complexes PRC1 and PRC2 regulate chromatin compaction and gene expression and have a fundamental role in developmental processes. PRC-mediated epigenetic aberrations are linked to genome instability and malignancy and play a role in the DNA damage response, DNA repair and in the fidelity of replication . [ 20 ]
Saccharomyces cerevisiae , or budding yeast, is a model eukaryote and its heterochromatin has been defined thoroughly. Although most of its genome can be characterized as euchromatin, S. cerevisiae has regions of DNA that are transcribed very poorly. These loci are the so-called silent mating type loci (HML and HMR), the rDNA (encoding ribosomal RNA), and the sub-telomeric regions.
Fission yeast ( Schizosaccharomyces pombe ) uses another mechanism for heterochromatin formation at its centromeres. Gene silencing at this location depends on components of the RNAi pathway. Double-stranded RNA is believed to result in silencing of the region through a series of steps.
In the fission yeast Schizosaccharomyces pombe , two RNAi complexes, the RITS complex and the RNA-directed RNA polymerase complex (RDRC), are part of an RNAi machinery involved in the initiation, propagation and maintenance of heterochromatin assembly. These two complexes localize in a siRNA -dependent manner on chromosomes, at the site of heterochromatin assembly. RNA polymerase II synthesizes a transcript that serves as a platform to recruit RITS, RDRC and possibly other complexes required for heterochromatin assembly. [ 21 ] [ 22 ] Both RNAi and an exosome-dependent RNA degradation process contribute to heterochromatic gene silencing. These mechanisms of Schizosaccharomyces pombe may occur in other eukaryotes. [ 23 ] A large RNA structure called RevCen has also been implicated in the production of siRNAs to mediate heterochromatin formation in some fission yeast. [ 24 ] | https://en.wikipedia.org/wiki/Heterochromatin |
Parabiosis is a laboratory technique used in physiological research, derived from the Greek word meaning "living beside." The technique involves the surgical joining of two living organisms in such a way that they develop a single, shared physiological system . Through this approach, researchers can study the exchange of blood , hormones , and other substances between the two organisms, allowing for the examination of a wide range of physiological phenomena and interactions. Parabiosis has been employed in various fields of study, including stem cell research, endocrinology , aging research , and immunology .
Heterochronic parabiosis involves parabiosis of animals of different ages; this allows researchers to study how circulating blood-borne factors influence aging and tissue regeneration. The method has led to insights into stem cell function, neurogenesis , regeneration (biology) , and aging . In contrast, isochronic parabiosis joins two animals of the same age.
Parabiosis combines two living organisms which are joined surgically and develop single, shared physiological systems. [ 1 ] [ 2 ] Researchers can prove that the feedback system in one animal is circulated and affects the second animal via blood and plasma exchange.
Parabiotic experiments were pioneered by Paul Bert in the mid-1800s. He postulated that surgically connected animals could share a circulatory system. Bert was awarded the Prize of Experimental Physiology of the French Academy of Science in 1866 for his discoveries. [ 3 ]
One limitation of the experiments is that outbred rats cannot be used because it can lead to a significant loss of pairs due to intoxication of the blood supply from a dissimilar rat. [ 4 ]
Many of the parabiotic experiments since 1950 involve research regarding metabolism. One of these experiments was published in 1959 by G. R. Hervey in the Journal of Physiology . This experiment supported the theory that damage to the hypothalamus , particularly the ventromedial hypothalamus, leads to obesity caused by the overconsumption of food. The study's rats were from the same litter, which had been a closed colony for multiple years. The two rats in each pair had no more than a 3% difference in weight. Rats were paired at four weeks old. Unpaired rats were used as controls. The rats were conjoined in three ways. In early experiments, the peritoneal cavities were opened and connected between the two rats. In later experiments, to avoid the risk of tangling the two rats’ intestines together, smaller cuts were made. After further refinement of the experimental procedure, the abdominal cavities were not opened, and the rats were conjoined at the hip bone with minimal cutting. To prove that the two animals were sharing blood, researchers injected dye into one rat's veins, and the pigment would show up in the conjoined rat.
In each pair, one rat became obese and exhibited hyperphagia. The weight of the rat with the surgical lesion rose rapidly for a few months, then reached a plateau as a direct result of the surgical procedure. After the procedure, the rat with the impaired hypothalamus ate voraciously while the paired rat's appetite decreased. The paired rat became obviously thin throughout the experiment, even rejecting food when it was offered. [ 5 ] [ 6 ]
Later studies identified this satiety factor as the adipose -derived hormone leptin . Many hormones and metabolites were proven not to be the satiety factor that caused one rat to starve in the experiments. Leptin seemed like a viable candidate. Starting in 1977, Ruth B.S. Harris, a graduate student under Hervey, repeated previous studies about parabiosis in rats and mice. Due to the discovery of leptin, she analyzed leptin concentrations of the mice in the parabiotic experiments. After injecting leptin into each pair's obese mouse, she found that leptin circulated between the conjoined animals, but the circulation of leptin took some time to reach equilibrium. The injections produced almost immediate weight loss in the parabiotic pairs due to increased inhibition. The pairs lost approximately 50–70% of their fat. The obese mouse lost only fat. The lean mouse lost muscle mass and fat. Harris concluded that leptin levels are increased in obese animals, but other factors could also affect them. Also, leptin was determined to decrease fat storage in both obese and thin animals. [ 4 ]
Early parabiotic experiments also included cancer research. One study, published in 1966 by Friedell, examined the effects of X-ray radiation on ovarian tumors. To study the tumors, two adult female rats were conjoined. The left rat was shielded, and the right rat was exposed to high levels of radiation. The rats were given a controlled amount of food and water. In 149 of 328 pairs, possible ovarian tumors appeared in the irradiated animals, but not in their partners. This result matched previous studies of single rats. [ 7 ]
Chronic diseases of age are studied by conjoining an older animal with a younger animal. Known as heterochronic parabiosis , this process has been used in studies to investigate the age-related and disease-related changes in the composition of the blood, especially plasma proteome . [ 8 ] This process could be used to research cardiovascular disease, diabetes, osteoarthritis, and Alzheimer's disease. As animals age, their oligodendrocytes reduce in efficiency, resulting in decreased myelination , causing negative effects on the central nervous system (CNS). Julia Ruckh and fellow researchers have used parabiosis to study remyelination from adult stem cells to see if conjoining young with older mice could reverse or delay this process. The two mice were conjoined in the experiment, and demyelination was induced via injection into the older mice. The experiment determined that the younger mice's factors reversed CNS demyelination in older mice by revitalizing the oligodendrocytes. The monocytes from the younger mice also enhanced the older mice's ability to clear myelin debris because the young monocytes can clear lipids from myelin sheaths more effectively than older monocytes. The conjoining of the two animals reversed the effects of age on the myelination cells. The ability of the young mouse's cells was unaffected. Enhanced immunity from the younger mouse also promoted the general health of the older mouse in each pair. The results of this experiment could lead to therapy processes for people with demyelinating diseases like multiple sclerosis. [ 9 ] [ 3 ]
Studies using heterochronic parabiosis have shown that exposure of old mice to young blood can reverse some age-related impairments in multiple tissues, including the brain, liver, heart, and skeletal muscle. Conversely, young mice exposed to old blood often show signs of accelerated aging.
The term is also applicable to spontaneously occurring conditions such as in conjoined twins . [ 13 ]
Obligate parasitic reproduction occurs in anglerfish of the family Ceratiidae , in which the circulatory systems of the males and females unite completely. Without the attachment of males to females, the endocrine functions cannot mature; the individuals fail to develop properly and die young and without reproducing. [ 14 ]
Plants growing closely together, with roots or stems in intimate contact, sometimes form natural grafts. In parasitic plants such as mistletoe and dodder the haustoria unite the circulatory systems of the host and the parasite so intimately that parasitic twiners such as Cassytha may act as vectors carrying disease organisms from one host plant to another. [ 15 ]
Ant colonies can share their nests with essentially unrelated species of ants, and even non-ants . The associated colonies do not obviously share anything beyond the nest's upkeep, even segregating their brood, so these were very surprising observations; most ants are radically intolerant of intruders, usually including even intruders of their own species.
In the early 20th century Auguste-Henri Forel coined the term "parabiosis" for such associations, and it was adopted by the likes of William Morton Wheeler . [ 16 ] [ 17 ] Furthermore, there is evidence for the partitioning of functions of work between the two species in the nest. [ 18 ] Early reports that parabiotic ant colonies forage and feed together peacefully also have been qualified by observations that revealed ants of one species in such an association aggressively displacing members of the other species from artificially provided food, while also profiting by following their recruitment trails to new food sources. [ 17 ] Both species can benefit from shared nest defence and maintenance even when there is neither direct cooperation nor interaction between the two associated populations in a nest. [ 19 ]
Parabiosis derives most directly from Neo-Latin , [ 13 ] but the Latin in turn derives from two classical Greek roots. The first is παρά ( para ) for "beside" or "next to". In modern etymology, this root appears in various senses, such as "close to", "outside of", and "different".
The second classical Greek root from which the Latin derives is βίος ( bios ), meaning "life." | https://en.wikipedia.org/wiki/Heterochronic_parabiosis |
Heteroclinic channels are ensembles of trajectories that can connect saddle equilibrium points in phase space . [ 1 ] Dynamical systems and their associated phase spaces can be used to describe natural phenomena in mathematical terms; heteroclinic channels, and the cycles (or orbits) that they produce, are features in phase space that can be designed to occupy specific locations in that space. Heteroclinic channels move trajectories from one equilibrium point to another. More formally, a heteroclinic channel is a region in phase space in which nearby trajectories are drawn closer and closer to one unique limiting trajectory, the heteroclinic orbit. Equilibria connected by heteroclinic trajectories form heteroclinic cycles and cycles can be connected to form heteroclinic networks . [ 2 ] Heteroclinic cycles and networks naturally appear in a number of applications, such as fluid dynamics , [ 3 ] [ 4 ] population dynamics , [ 5 ] and neural dynamics. [ 6 ] [ 7 ] In addition, dynamical systems are often used as methods for robotic control. In particular, for robotic control, the equilibrium points can correspond to robotic states, and the heteroclinic channels can provide smooth methods for switching from state to state. [ 8 ] [ 9 ]
Heteroclinic channels (or heteroclinic orbits ) are building blocks for a subset of dynamical systems that are built around connected saddle equilibrium points. Homoclinic channels/orbits join a single equilibrium point to itself, whereas heteroclinic channels join two different saddle equilibrium points in phase space. The connection is formed from the unstable manifold of the first saddle (“pushing away” from that point) to the stable manifold of the next saddle point (“pulling towards” this point). Combining at least three saddle equilibria in this way produces a heteroclinic cycle, [ 1 ] and multiple heteroclinic cycles can be connected into heteroclinic networks.
Heteroclinic channels have both spatial and temporal features in phase space. Spatial because they affect trajectories within a certain region around themselves, [ 1 ] and temporal because the parameters of a heteroclinic channel affect how much time a trajectory spends along that channel (or more specifically, how much time it spends around one of the saddle points). [ 9 ] The transient nature of heteroclinic channels is important for describing their “switching” nature. That is, some neighborhood around each equilibrium point can be defined as a separate state, and the heteroclinic channel itself presents a method of switching sequentially between these states. [ 8 ] [ 10 ]
Heteroclinic "switching" is an important descriptor for natural phenomena, especially in neural dynamics . It has also been used as an approach for designing robotic control methods which cycle between states, whether those states are pre-defined behaviors [ 11 ] or transient states that lead to larger behaviors. [ 12 ]
The mathematical image described above – a series of states with a functional mechanism for switching between them – also describes a phenomenon known as winnerless competition (WLC). Winnerless competition describes the switching phenomenon between two competitive states and was identified by Busse & Heikes in 1980 when they were investigating the change of phases in a convection cycle. [ 3 ] However, the transient dynamics of WLC are widely agreed to first have been presented by Alfred J. Lotka , who first developed the concept to describe autocatalytic chemical reactions in 1910 [ 13 ] and then developed an extended version in 1925 to describe ecological predator-prey relationships. In 1926, Vito Volterra independently published the same set of equations with a focus on mathematical biology, especially multi-species interactions. [ 14 ] These equations, now known as the Lotka-Volterra equations , are widely used as a mathematical model to describe transient heteroclinic switching dynamics.
Heteroclinic cycles which describe the transition between at least three states were first described by May & Leonard in 1975. They identified a special case of the Lotka-Volterra equations for population dynamics [ 5 ] . The re-emergence of heteroclinic cycles and the increased ability to do numerical computations as compared to the period of Lotka and Volterra, prompted a resurgence of interest in heteroclinic channels, cycles, and networks as mathematical models for transient sequential dynamics.
Heteroclinic channels have become models for neural dynamics. An example is Laurent et al. (2001) who described the neural responses of fish and insects to olfactory stimuli as a WLC system, where each stimulus and its response could be identified as a separate state within the space. [ 15 ] The responses could be modeled in this way because of their spatial and temporal properties, which aligned with the spatiotemporal nature of WLC. Rabinovich et al. (2001) & Afraimovich et al. (2004) used WLC networks (via the Fitzhugh-Nagumo & Lotka-Volterra models , respectively) to connect the mathematical concept of stable heteroclinic channels (SHCs) to transient neural dynamics more generally, [ 6 ] [ 16 ] particularly other sensory processes and more abstract neural connections. Rabinovich et al. (2008) expanded this idea to larger cognitive dynamic systems, and large-scale brain networks. [ 9 ] [ 17 ] [ 18 ] Stable heteroclinic channels have also been used to model neuromechanical systems. The feeding structures and associated feeding processes (stages of swallowing) of marine mollusks have been analyzed using heteroclinic channels. [ 19 ] [ 20 ]
Biological models have always been a source of inspiration for roboticists, especially those interested in robotic control. Since robotic control requires defining and sequencing the physical actions of the robot, models of neural dynamics can be very useful. An example of this can be found in central pattern generators , which are widely used for rhythmic robotic motion. [ 21 ] Heteroclinic channels have been used to replicate central pattern generators for robot control. [ 12 ] Similarly, dynamic movement primitives, another common robotic motion control system, have been adapted and made more flexible by using heteroclinic channels. [ 22 ] In more practical applications, stable heteroclinic channels have been directly used in the control of several biologically-inspired robots [ 11 ] [ 23 ] [ 24 ]
A dynamical system is a rule or set of rules that describe the evolution of a state (or a system of states) in time. The set of all possible states is called the state space . The phase space is the state space of a continuous system. Dynamical systems describe the state over time with mathematical equations, often ordinary differential equations . The current state at a particular time can be plotted as a point in phase space. The set of points over time can be plotted as a trajectory.
A heteroclinic channel itself can be asymptotically stable. That is, any point near the vicinity of the channel is attracted to the heteroclinic cycle at the core of the channel. Both heteroclinic channels and cycles can be robust (or structurally stable ) if, within a given parameter range, they maintain a given behavior; however, this is not required.
Noise is one input into a heteroclinic system to move it from one equilibrium to the next. The reason is that noise (or some other stochasticity ) disturbs the system enough to move it into the vicinity of the next saddle equilibrium point in the sequence. The amount of noise required is inversely proportional to the “attractiveness” of the saddle points; the more attractive the stable part of the saddle is to the system state, the longer the trajectory will linger in its vicinity, and the more noise will be required to move the system’s state off of that attractive equilibrium point. There are also other ways of moving between the equilibrium points including parametric changes, or using sensory feedback. [ 19 ] [ 25 ]
Control theory , in robotics, deals with the use of dynamical systems to control robotic systems. The goal of robotic control is to perform precise, coordinated actions using physical actuators in response to sensor input. Dynamical systems can be used to drive the robot to a desired state (or set of states) using sensor input to minimize actuator errors.
An equilibrium point in a dynamical system is a solution to the system of differential equations describing a trajectory that does not change with time. Equilibrium points can be described by their stability, which are often determined by the eigenvalues of the system’s Jacobian matrix . In general, the eigenvalues of a saddle point have non-zero real parts, at least one of the real parts is positive and at least one of the real parts is negative. Any eigenvalue with a negative real value indicates a stable manifold of the saddle which attracts trajectories, whereas any eigenvalue with a positive real value indicates the unstable manifold of the saddle which repels trajectories.
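As an illustration of this classification, the following minimal Python sketch (not taken from the article; the example system and finite-difference step are illustrative choices) estimates a Jacobian by central differences and checks the saddle condition at two equilibria.

```python
import numpy as np

def f(x):
    """Illustrative planar system with equilibria at (0, 0) and (1, 0)."""
    return np.array([x[0] * (1.0 - x[0]), -0.5 * x[1]])

def numerical_jacobian(f, x0, eps=1e-6):
    """Central-difference estimate of the Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2.0 * eps)
    return J

for eq in (np.array([0.0, 0.0]), np.array([1.0, 0.0])):
    eigs = np.linalg.eigvals(numerical_jacobian(f, eq))
    is_saddle = np.any(eigs.real > 0) and np.any(eigs.real < 0)
    print(eq, eigs.real, "saddle" if is_saddle else "not a saddle")
```

In this example the origin has one positive and one negative eigenvalue and is therefore a saddle, while (1, 0) has only negative real parts and is a stable node.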
Let x ˙ = f ( x ) {\textstyle {\dot {x}}=f(x)} be the ordinary differential equation describing a continuous dynamical system. If there are equilibria at x = x 0 {\displaystyle x=x_{0}} and x = x 1 {\displaystyle x=x_{1}} , then a solution ϕ ( t ) {\displaystyle \phi (t)} is a heteroclinic connection from x 0 {\displaystyle x_{0}} to x 1 {\displaystyle x_{1}} if
ϕ ( t ) → x 0 {\displaystyle \phi (t)\rightarrow x_{0}} as t → − ∞ {\displaystyle t\rightarrow -\infty }
and
ϕ ( t ) → x 1 {\displaystyle \phi (t)\rightarrow x_{1}} as t → + ∞ {\displaystyle t\rightarrow +\infty }
This implies that the connection is contained in the stable manifold of x 1 {\displaystyle x_{1}} and the unstable manifold of x 0 {\displaystyle x_{0}} .
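A minimal numerical sketch of the two limits above (an assumed, illustrative one-dimensional example, not from the sources): for the system with right-hand side x(1 − x), the solution starting at x = 0.5 tends to the equilibrium 1 forward in time and to the equilibrium 0 backward in time, so it traces a heteroclinic connection.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: x * (1.0 - x)            # equilibria at x0 = 0 and x1 = 1

phi_forward = solve_ivp(f, (0.0, 50.0), [0.5]).y[0, -1]    # t -> +infinity
phi_backward = solve_ivp(f, (0.0, -50.0), [0.5]).y[0, -1]  # t -> -infinity

# The endpoints approximate the two limits: ~1.0 forward and ~0.0 backward.
print(phi_forward, phi_backward)
```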
Neural dynamics are the non-linear dynamics that describe neural processes, from single neurons to cognitive processes and large-scale neural systems.
This model was first presented independently by Alfred J. Lotka for autocatalytic chemical reactions [ 13 ] and then again for biological species in competition by Vito Volterra from a mathematical biology perspective. [ 14 ] Originally, this model was only considered for two species: the two chemical species in the reaction, or a predator-prey situation in a shared environment.
The original equations were based on the logistic population equation , which is popularly used in ecology .
d x d t = r x ( 1 − x K ) {\displaystyle {dx \over dt}=rx{\Bigl (}1-{x \over K}{\Bigr )}}
where x {\displaystyle x} is the size or concentration of a species at a given time, r {\displaystyle r} is the growth rate and K {\displaystyle K} is the carrying capacity of that species.
Lotka incorporated a term for the interaction between species and, with some generalization, the series of equations can be written as follows:
d x i ( t ) d t = r x i ( t ) [ 1 − ∑ j = 1 N α i j x j ( t ) ] {\displaystyle {dx_{i}(t) \over dt}=rx_{i}(t){\Biggl [}1-\sum _{j=1}^{N}\alpha _{ij}x_{j}(t){\Biggr ]}}
In this definition, x i ( t ) {\displaystyle x_{i}(t)} is the size or concentration of the i {\displaystyle i} -th species and N {\displaystyle N} is the total number of species. The interaction between each species is described by the matrix α {\displaystyle \alpha } .
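A minimal sketch (an assumed helper, not taken from the sources) of this generalized Lotka-Volterra vector field, written so that the interaction matrix α can be supplied directly:

```python
import numpy as np

def glv_rhs(x, r, alpha):
    """Generalized Lotka-Volterra right-hand side:
    dx_i/dt = r * x_i * (1 - sum_j alpha_ij * x_j)."""
    return r * x * (1.0 - alpha @ x)

# Illustrative example: two species with symmetric competition.
alpha = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
print(glv_rhs(np.array([0.2, 0.3]), r=1.0, alpha=alpha))
```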
May and Leonard expanded the Lotka-Volterra equations by investigating the system in which three species interact with each other (i.e., N = 3 {\displaystyle N=3} ). They found that for a system in which each equilibrium point is a saddle with an N − 1 {\displaystyle N-1} dimensional stable manifold, and the unstable manifold connects the points sequentially, the equation above can be re-written as follows:
d x i ( t ) d t = x i ( t ) [ 1 − ∑ j = 1 N ρ i j x j ( t ) ] {\displaystyle {dx_{i}(t) \over dt}=x_{i}(t){\Biggl [}1-\sum _{j=1}^{N}\rho _{ij}x_{j}(t){\Biggr ]}}
Explicitly for N = 3 {\displaystyle N=3} , this becomes d x 1 d t = x 1 ( 1 − x 1 − α x 2 − β x 3 ) {\displaystyle {dx_{1} \over dt}=x_{1}(1-x_{1}-\alpha x_{2}-\beta x_{3})} , d x 2 d t = x 2 ( 1 − β x 1 − x 2 − α x 3 ) {\displaystyle {dx_{2} \over dt}=x_{2}(1-\beta x_{1}-x_{2}-\alpha x_{3})} , d x 3 d t = x 3 ( 1 − α x 1 − β x 2 − x 3 ) {\displaystyle {dx_{3} \over dt}=x_{3}(1-\alpha x_{1}-\beta x_{2}-x_{3})} ,
where the coupling matrix, ρ {\displaystyle \rho } , is given by
ρ = [ 1 α β β 1 α α β 1 ] . {\displaystyle \rho ={\begin{bmatrix}1&\alpha &\beta \\\beta &1&\alpha \\\alpha &\beta &1\end{bmatrix}}.}
In this model, the stability of the saddle equilibria can be easily determined. The stability requirements for the formation of a stable heteroclinic cycle are α + β ≥ 2 {\displaystyle \alpha +\beta \geq 2} with either α > 1 {\displaystyle \alpha >1} or β > 1 {\displaystyle \beta >1} . [ 5 ]
It was noted in this work that the system never asymptotically reaches any of the equilibrium points, but the amount of time the trajectory spends near each equilibrium point increases with time. In ecological terms, this suggests that a single population would eventually “beat out” the other two. May & Leonard noted that this is not a practical result in biology (see also [ 25 ] ).
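The behaviour described above can be reproduced numerically. The sketch below (parameter values and integration settings are illustrative assumptions, not taken from May & Leonard) integrates the three-species system with the coupling matrix ρ given earlier and prints the times at which the dominant species changes; the intervals between switches lengthen as the trajectory approaches the heteroclinic cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.5, 0.8            # satisfies alpha + beta >= 2 with alpha > 1
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])

def rhs(t, x):
    return x * (1.0 - rho @ x)

x0 = np.array([0.5, 0.3, 0.2])    # arbitrary interior initial condition
sol = solve_ivp(rhs, (0.0, 300.0), x0, max_step=0.05)

# Index of the dominant species at each time step; it cycles 1 -> 2 -> 3 -> 1,
# and the dwell time near each saddle grows as the cycle is approached.
dominant = np.argmax(sol.y, axis=0)
switch_times = sol.t[1:][np.diff(dominant) != 0]
print(np.round(switch_times, 1))
```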
The “Winnerless Competition” framework (suggested by Laurent et al.) [ 15 ] allowed a single neuron and/or a collection of synchronized neurons to be encoded between "on" and "off". Laurent et al. investigated olfaction in fish and insects, particularly olfactory reception, and some of the postsynaptic structures in the odor sensory system . They found that the processing (or encoding) of perceived odors occurred over at least three timescales: fast, intermediate, and slow. They posited that an odor encoding system should be reproducible, which requires it to be insensitive to (or rapidly forget) any initial state. This is only possible if the dynamical system is strongly dissipative, that is, it settles on a state quickly and is insensitive to internal noise. Conversely, a useful odor encoding system should be sensitive to small variations in input, which requires the system to be active. An active system uses external sources to allow small variations in initial states to grow with time. The winnerless competition framework allowed a single neuron (or node) to encode a stimulus (the “fast” timescale), or many stimuli could be encoded via stimulus-specific trajectories (the “slow” timescale).
The winnerless competition system was described by
d x i ( t ) d t = x i ( t ) [ 1 − ∑ j = 1 N ρ i j x j ( t ) ] + S i s {\displaystyle {dx_{i}(t) \over dt}=x_{i}(t){\Biggl [}1-\sum _{j=1}^{N}\rho _{ij}x_{j}(t){\Biggr ]}+S_{i}^{s}}
where x i ( t ) {\displaystyle x_{i}(t)} and x j ( t ) {\displaystyle x_{j}(t)} characterize the activities of stimulus-specific groups i {\displaystyle i} and j {\displaystyle j} , respectively, N {\displaystyle N} is the number of neurons being simulated, ρ i j > 0 {\displaystyle \rho _{ij}>0} characterizes the strength of inhibition by i {\displaystyle i} and j {\displaystyle j} (i.e., their interactions with each other), and S i s ( t ) {\displaystyle S_{i}^{s}(t)} is the current input by a stimulus s {\displaystyle s} to i {\displaystyle i} . [ 15 ]
Winnerless competition required that the inhibitory connections in the ρ {\displaystyle \rho } matrix were asymmetrical and cyclic. For example, for N = 3 {\displaystyle N=3} , if ρ 11 , ρ 22 , ρ 33 = 1 {\displaystyle \rho _{11},\rho _{22},\rho _{33}=1} then ρ 12 , ρ 23 , ρ 31 > 1 {\displaystyle \rho _{12},\rho _{23},\rho _{31}>1} , and ρ 21 , ρ 32 , ρ 13 < 1 {\displaystyle \rho _{21},\rho _{32},\rho _{13}<1} .
Overall, this description produces a heteroclinic channel composed of several heteroclinic orbits (trajectories).
Sensory encoding via heteroclinic orbits (which are facilitated by heteroclinic channels) as described by Laurent et al. was extrapolated beyond the olfactory system. Rabinovich et al. [ 6 ] explored winnerless competition as a spatiotemporal dynamical system corresponding to the activity of specific neurons or groups of neurons. They identified the added stimulus as the factor that would drive a trajectory from one node along the channel to the next. Without it, the system would reduce to a steady state in which one neuron (or neuronal group) was active whereas the others were quiescent.
Afraimovich et al. [ 16 ] also developed winnerless competition using connected saddle points in phase space as a model for transient, sequential neural activity. They outlined how the saddle points should be defined, the conditions for heteroclinic connections between them and the conditions for heteroclinic sequence stability. They performed numerical simulations of the dynamics of a network with N = 50 neurons and used Gaussian noise as the external input. They found that the movement of a trajectory along each connection was initiated by the noise, and the speed of switching from one saddle to the next depended on the noise level.
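A much smaller illustration of this noise-driven switching (an assumed sketch with N = 3 and illustrative parameters, not a reproduction of the N = 50 simulations) can be obtained by adding Gaussian noise to the Lotka-Volterra-type system above and integrating with the Euler-Maruyama method; the additive noise keeps the trajectory switching from saddle to saddle instead of slowing down indefinitely, and larger noise produces faster switching.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.5, 0.8
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])

x = np.array([0.5, 0.3, 0.2])
dt, noise_level, steps = 1e-3, 1e-6, 500_000
dominant = []
for _ in range(steps):
    drift = x * (1.0 - rho @ x)
    x = x + drift * dt + noise_level * np.sqrt(dt) * rng.standard_normal(3)
    x = np.clip(x, 1e-12, None)   # keep the state in the positive orthant
    dominant.append(int(np.argmax(x)))

# Count how often the identity of the dominant (active) node changes.
switches = int(np.count_nonzero(np.diff(dominant)))
print(switches, "switches over", steps * dt, "time units")
```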
The sequential switching property of stable heteroclinic channels has been expanded to describe higher-level transient cognitive dynamics, particularly sequential decision making . Rabinovich et al. [ 17 ] first introduced this idea by applying the sequential switching that characterizes stable heteroclinic channels to the sequential decision making process seen in a fixed time game. The player takes sequential actions in a changing environment to maximize some reward. For a fixed time game, in order to maximize the reward, the player must encounter as many decision states as possible. This means that within a fixed amount of time, the trajectory must pass in the vicinity of as many saddle points, or nodes, as possible. When the trajectory reached the vicinity of a saddle point, a decision-making function was applied.
The reward was maximized by choosing appropriate system parameters. One of these was a decision-making rule that corresponded to the fastest motion away from the saddle, which was the shortest time to reach the next saddle. Additionally, there was an optimal level of additive noise; the noise was high enough that the trajectory could move away from each saddle quickly, but not so high that the trajectory would be directed off the cycle entirely.
A major point of this work was that, without significant external stimulus, the player was likely to find one of two extremes: ending decision-making quickly or reaching a cycle that runs through the entire allotted time. Behaviorally, this cycle translates to habit formation (on a cognitive level) and is sensitive to external stimuli that can change the trajectory’s direction at any time.
Rabinovich & Varona [ 18 ] described sequential memory in a similar way. They also introduced “chunking”, which describes how the brain groups sequential information items into chunks at different hierarchical levels. They used stable heteroclinic channels as a framework for building these chunks into high level heteroclinic networks.
Heteroclinic channels have also been used as a model for neuromechanical systems in animals, particularly the feeding structures in marine mollusks. [ 19 ] [ 20 ] Shaw et al. (2015) investigated potential models for the feeding behavior of Aplysia californica . They found that heteroclinic channels could more accurately match features of actual experimental data than other models such as limit cycles . Lyttle et al. (2017) showed that both the heteroclinic model and the limit cycle model of the Aplysia californica ’s feeding system grant different advantages and disadvantages, such as robustness to perturbations and flexibility to inputs. They also showed that a reasonable model of the animal’s behavior could be made by switching between these modes, heteroclinic and limit cycle, using external sensory input, providing a dynamical basis for understanding both robustness and flexibility in motor systems.
Mathematical expansions of the framework are required for robotic control applications.
For higher dimensional systems, the connection/inhibition matrix ρ {\displaystyle \rho } can be generalized as:
ρ = [ 1 α γ γ ⋯ γ β β 1 α γ γ ⋯ γ γ β 1 α γ ⋯ γ γ γ β 1 α ⋯ γ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ γ ⋯ γ β 1 α α γ ⋯ γ γ β 1 ] {\displaystyle \rho ={\begin{bmatrix}1&\alpha &\gamma &\gamma &\cdots &\gamma &\beta \\\beta &1&\alpha &\gamma &\gamma &\cdots &\gamma \\\gamma &\beta &1&\alpha &\gamma &\cdots &\gamma \\\gamma &\gamma &\beta &1&\alpha &\cdots &\gamma \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\gamma &\cdots &&\gamma &\beta &1&\alpha \\\alpha &\gamma &\cdots &\gamma &\gamma &\beta &1\end{bmatrix}}}
or formulations similar to this. [ 26 ]
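A small sketch (an assumed helper function, one of several equivalent constructions) that builds this cyclic N × N connection matrix, with 1 on the diagonal, α immediately to the right of the diagonal, β immediately to the left (both wrapping around the corners) and γ elsewhere:

```python
import numpy as np

def cyclic_connection_matrix(N, alpha, beta, gamma):
    """Cyclic inhibition matrix of the form shown above."""
    rho = np.full((N, N), gamma)
    np.fill_diagonal(rho, 1.0)
    for i in range(N):
        rho[i, (i + 1) % N] = alpha   # alpha to the right of the diagonal
        rho[i, (i - 1) % N] = beta    # beta to the left of the diagonal
    return rho

print(cyclic_connection_matrix(5, alpha=1.5, beta=0.8, gamma=1.2))
```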
Appropriate saddle values must be assigned to make the system dissipative. The strength of the saddle can be characterized by its two largest eigenvalues: the single unstable eigenvalue, λ u {\displaystyle \lambda ^{u}} , and the weakest stable eigenvalue, − λ s {\displaystyle -\lambda ^{s}} . The saddle value of the i {\displaystyle i} - th node can be defined as
v i = R e λ i s λ i u {\displaystyle v_{i}={Re\lambda _{i}^{s} \over \lambda _{i}^{u}}}
If v i > 1 {\displaystyle v_{i}>1} , the i {\displaystyle i} -th node is dissipative and stable, and if ∏ i = 1 N v i > 1 {\displaystyle \textstyle \prod _{i=1}^{N}\displaystyle v_{i}>1} the entire cycle will be stable. [ 27 ]
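A minimal sketch (assumed helpers, illustrative parameter values) of this check for the three-saddle Lotka-Volterra example used earlier: compute the Jacobian at each saddle, form the saddle value from its unstable eigenvalue and its weakest stable eigenvalue, and test whether the product exceeds 1.

```python
import numpy as np

alpha, beta = 1.5, 0.8
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])

def jacobian(x):
    """Jacobian of f_i(x) = x_i * (1 - sum_j rho_ij x_j)."""
    return np.diag(1.0 - rho @ x) - np.diag(x) @ rho

def saddle_value(J):
    """v = |Re(weakest stable eigenvalue)| / (unstable eigenvalue).
    Assumes the saddle structure described in the text."""
    eigs = np.linalg.eigvals(J)
    lam_u = max(e.real for e in eigs if e.real > 0)
    lam_s = -max(e.real for e in eigs if e.real < 0)   # weakest stable
    return lam_s / lam_u

saddles = [np.eye(3)[i] for i in range(3)]             # (1,0,0), (0,1,0), (0,0,1)
v = [saddle_value(jacobian(e)) for e in saddles]
print(v, "stable cycle" if np.prod(v) > 1 else "cycle not stable")
```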
SHCs have been used directly to control robots, particularly biologically-inspired robotic systems . SHCs have also been used to adapt existing robot control frameworks. In both instances, the special properties of SHCs were used to improve the associated control tasks. Some examples include integrated contact sensing to modulate the additive SHC noise, [ 11 ] a combined Gaussian Mixture Model to inform SHC "switching", [ 24 ] a central pattern generator which was adapted to be temporally sensitive, [ 12 ] and a modified control framework which has an intuitive visualization property. [ 22 ]
SHCs make it possible to use sensory feedback for rapid choices in a high degree-of-freedom robotic system. For example, Daltorio et al. used SHCs as a controller for the simulated locomotion of a worm-like robot in a pipe. [ 11 ] The robot's structure consisted of 12 actuated body segments, each with one degree of freedom: segment length. Each segment coupled its height to the length such that as the length decreased, the height increased. This structure was used to simulate peristaltic locomotion, as the segments’ actuation was coordinated to form a peristaltic wave down the robot, with each segment contracting one after the other down the robot body.
For this system, each body segment was associated with a saddle point in the SHC system. The multi-dimensional connection matrix was constructed so that each point inhibited its neighbors except the point immediately after it. This asymmetry caused the active SHC node to move “backwards” down the robot structure, while the body moved forward.
The controller was tested in multiple pipe-shaped paths where contact sensors on the robot could provide information on the environment. Contact sensing information was used to modulate the noise added to the system, which in turn allowed the activation sequence to be altered. This was key for highly coordinated movement across all segments.
SHCs can be used to inform the switching among complex configurations. Petrič et al. used a combined Gaussian Mixture Model (GMM) and SHC system to control a spinal exoskeleton. [ 24 ] The exoskeleton was designed as a quasi-passive system that physically supports the user to different degrees depending on the current pose or movement of the user. Different functional poses/movements were identified as the nodes within the SHC system. GMMs were used to indicate what the additive inputs for each SHC node should be, which would drive the system from one pose to the next.
SHCs have been used as an alternative to central pattern generators for robotic control. [ 12 ] Horchler et al. used SHCs to produce an oscillator whose behavior near each node could be manipulated using system parameters: additive noise and saddle values. This produced a cyclic controller that could spend more time at a particular node when needed. The controller's responsiveness to external input was demonstrated by pausing and resetting the cycle using additive noise.
Rouse & Daltorio replaced the underlying attractor points of dynamic movement primitives, another biologically-inspired robotic control method, with the saddle points of SHCs. [ 22 ] This adaptive framework maintained the stability of the system. Additionally, it provided a visualization property which allowed the user to intuitively place saddle points in phase space to match a desired trajectory in the task space. | https://en.wikipedia.org/wiki/Heteroclinic_channels |
In mathematics , a heteroclinic cycle is an invariant set in the phase space of a dynamical system . It is a topological circle of equilibrium points and connecting heteroclinic orbits . If a heteroclinic cycle is asymptotically stable, approaching trajectories spend longer and longer periods of time in a neighbourhood of successive equilibria.
In generic dynamical systems heteroclinic connections are of high co-dimension, that is, they will not persist if parameters are varied.
A robust heteroclinic cycle is one which persists under small changes in the underlying dynamical system. Robust cycles often arise in the presence of symmetry or other constraints which force the existence of invariant hyperplanes. A prototypical example of a robust heteroclinic cycle is the Guckenheimer–Holmes cycle. This cycle has also been studied in the context of rotating convection, and as three competing species in population dynamics . | https://en.wikipedia.org/wiki/Heteroclinic_cycle |
In mathematics , a heteroclinic network is an invariant set in the phase space of a dynamical system . It can be thought of loosely as the union of more than one heteroclinic cycle . Heteroclinic networks arise naturally in a number of different types of applications, including fluid dynamics and populations dynamics.
The dynamics of trajectories near to heteroclinic networks is intermittent: trajectories spend a long time performing one type of behaviour (often, close to equilibrium), before switching rapidly to another type of behaviour. This type of intermittent switching behaviour has led to several different groups of researchers using them as a way to model and understand various type of neural dynamics.
| https://en.wikipedia.org/wiki/Heteroclinic_network
In mathematics , in the phase portrait of a dynamical system , a heteroclinic orbit (sometimes called a heteroclinic connection ) is a path in phase space which joins two different equilibrium points . If the equilibrium points at the start and end of the orbit are the same, the orbit is a homoclinic orbit .
Consider the continuous dynamical system described by the ordinary differential equation x ˙ = f ( x ) . {\displaystyle {\dot {x}}=f(x).} Suppose there are equilibria at x = x 0 , x 1 . {\displaystyle x=x_{0},x_{1}.} Then a solution ϕ ( t ) {\displaystyle \phi (t)} is a heteroclinic orbit from x 0 {\displaystyle x_{0}} to x 1 {\displaystyle x_{1}} if both limits are satisfied: ϕ ( t ) → x 0 as t → − ∞ , ϕ ( t ) → x 1 as t → + ∞ . {\displaystyle {\begin{array}{rcl}\phi (t)\rightarrow x_{0}&{\text{as}}&t\rightarrow -\infty ,\\[4pt]\phi (t)\rightarrow x_{1}&{\text{as}}&t\rightarrow +\infty .\end{array}}}
This implies that the orbit is contained in the stable manifold of x 1 {\displaystyle x_{1}} and the unstable manifold of x 0 {\displaystyle x_{0}} .
By using the Markov partition , the long-time behaviour of a hyperbolic system can be studied using the techniques of symbolic dynamics . In this case, a heteroclinic orbit has a particularly simple and clear representation. Suppose that S = { 1 , 2 , … , M } {\displaystyle S=\{1,2,\ldots ,M\}} is a finite set of M symbols. The dynamics of a point x is then represented by a bi-infinite string of symbols σ = ( … , s − 1 , s 0 , s 1 , … ) {\displaystyle \sigma =(\ldots ,s_{-1},s_{0},s_{1},\ldots )} with each s k ∈ S {\displaystyle s_{k}\in S} .
A periodic point of the system is simply a recurring sequence of letters. A heteroclinic orbit is then the joining of two distinct periodic orbits. It may be written as p ω s 1 s 2 ⋯ s n q ω {\displaystyle p^{\omega }s_{1}s_{2}\cdots s_{n}q^{\omega }}
where p = t 1 t 2 ⋯ t k {\displaystyle p=t_{1}t_{2}\cdots t_{k}} is a sequence of symbols of length k , (of course, t i ∈ S {\displaystyle t_{i}\in S} ), and q = r 1 r 2 ⋯ r m {\displaystyle q=r_{1}r_{2}\cdots r_{m}} is another sequence of symbols, of length m (likewise, r i ∈ S {\displaystyle r_{i}\in S} ). The notation p ω {\displaystyle p^{\omega }} simply denotes the repetition of p an infinite number of times. Thus, a heteroclinic orbit can be understood as the transition from one periodic orbit to another. By contrast, a homoclinic orbit can be written as
p ω s 1 s 2 ⋯ s n p ω {\displaystyle p^{\omega }s_{1}s_{2}\cdots s_{n}p^{\omega }} with the intermediate sequence s 1 s 2 ⋯ s n {\displaystyle s_{1}s_{2}\cdots s_{n}} being non-empty, and, of course, not being p , as otherwise, the orbit would simply be p ω {\displaystyle p^{\omega }} . | https://en.wikipedia.org/wiki/Heteroclinic_orbit
Heterocyclic amines , also sometimes referred to as HCA s, are chemical compounds containing at least one heterocyclic ring, which by definition has atoms of at least two different elements, as well as at least one amine (nitrogen-containing) group. Typically it is a nitrogen atom of an amine group that also makes the ring heterocyclic (e.g., pyridine ), though compounds exist in which this is not the case (e.g., the drug zileuton ). The biological functions of heterocyclic amines vary, including vitamins and carcinogens . Carcinogenic heterocyclic amines are created by high temperature cooking of meat and smoking of plant matter like tobacco . Some well known heterocyclic amines are niacin (vitamin B3), nicotine (psychoactive alkaloid and recreational drug), and the nucleobases that encode genetic information in DNA.
The compound pyrrolidine is composed of molecules that contain a saturated ring of five atoms. This cyclic structure is composed of one atom of nitrogen and four carbon. Nicotine is a molecule containing a pyrrolidine ring attached to a ring of pyridine (other heterocyclic amine). Nicotine belongs to a group of compounds known as alkaloids , which are naturally occurring organic compounds with nitrogen in them. Pyrrole is another compound made up of molecules with a five-membered heterocyclic ring. These molecules are unsaturated and contain a nitrogen atom in the ring. Four pyrrole rings are joined in a ring structure called a porphyrin .
The rings of porphyrin are components of hemoglobin , myoglobin , vitamin B12 , chlorophyll , and cytochromes . An iron ion sits at the center of the heme group in hemoglobin, myoglobin, and cytochromes; in the first two, the iron ion binds oxygen.
The structure of pyridine is similar to that of benzene except that a nitrogen atom replaces one carbon atom. Pyridine is used as a flavoring agent. The pyridine ring is part of two B vitamins: niacin and pyridoxine .
Niacin, also called nicotinic acid, is found in most organisms. Via metabolism, it becomes nicotinamide adenine dinucleotide (NAD), a coenzyme involved in oxidation and reduction reactions in cellular metabolism. A deficiency of niacin leads to a disease called pellagra .
Pyridoxine, or vitamin B6, becomes a major compound in the metabolism of amino acids .
Pyrimidine is a heterocyclic amine that contains two nitrogen atoms in an unsaturated six-membered ring. An example of a molecule that contains pyrimidine is thiamine , which is also known as vitamin B1. Thiamine deficiency produces beriberi .
Pyrimidine is a component of the nucleobases cytosine, uracil, and thymine. The other two nucleobases, adenine and guanine , are also heterocyclic amines called purines ; they are composed of a fused pyrimidine and imidazole .
Some HCAs found in cooked and especially burned meat are known carcinogens . Research has shown that heterocyclic amine formation in meat occurs at high cooking temperatures. [ 1 ] Heterocyclic amines are the carcinogenic chemicals formed from the cooking of muscle meats such as beef , lamb , pork , fish and poultry. [ 1 ] [ 2 ] HCAs form when amino acids and creatine (a chemical found in muscles) react at high cooking temperatures. [ 1 ]
Colorectal cancer is associated with high intakes of HCAs found in meat cooked at high temperature. [ 3 ]
Six hours of marinating in beer or red wine cut levels of two types of HCA in beef steak by up to 90% compared with unmarinated steak. [ 4 ]
Harmane , a β-carboline alkaloid found in meats is "highly tremorogenic" (tremor inducing). [ 5 ] [ 6 ] While harmane has been found in roughly 50% higher concentrations in patients with essential tremor than in controls, [ 7 ] there is no direct correlation between blood-levels and levels of daily meat consumption, suggesting a difference in metabolism of this chemical plays a greater role. [ 6 ] These chemicals are formed during the cooking process of meat, particularly the longer they are cooked, and the more they are exposed to high temperatures during cooking. [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Heterocyclic_amine |
Heterocyclic amines (HCAs) are a group of chemical compounds , many of which can be formed during cooking. They are found in meats that are cooked to the "well done" stage, in pan drippings and in meat surfaces that show a brown or black crust. Epidemiological studies show associations between intakes of heterocyclic amines and cancers of the colon , rectum , breast , prostate , pancreas , lung , stomach , and esophagus , and animal feeding experiments support a causal relationship. The U.S. Department of Health and Human Services Public Health Service labeled several heterocyclic amines as likely carcinogens in its 13th Report on Carcinogens . [ 1 ] Changes in cooking techniques reduce the level of heterocyclic amines.
More than 20 compounds fall into the category of heterocyclic amines. [ 2 ] Table 1 shows the chemical name and abbreviation of those most commonly studied.
All four of these compounds are included in the 13th Report on Carcinogens. [ 1 ]
The compounds found in food are formed when creatine (a non-protein amino acid found in muscle tissue), other amino acids and monosaccharides are heated together at high temperatures (125-300 °C or 275-572 °F) or cooked for long periods. HCAs form at the lower end of this range when the cooking time is long; at the higher end of the range, HCAs are formed within minutes. [ 4 ]
A review of 14 studies of HCA content in ground beef cooked under home conditions in northern Europe and the U.S. found a range of values (Table 2). Because a standard U.S. serving of meat is 3 ounces, Table 2 includes a projection of the maximum amount of HCAs that could be found in a ground beef patty.
( n.d.= none detected )
Meat is a major component of American diets. Data from 1960 show the combined annual per capita consumption of beef, pork and chicken at 148 pounds; in 2004, that amount increased to 195 pounds a year. [ 5 ] Ground beef made up 42% of the beef market in 2000. Beef consumption, particularly ground and processed beef, is highest in households with incomes at or below 130 percent of the poverty level.
Patterns of beef intake by race/ethnicity show that non-Hispanic whites and Asians consumed the least amount of beef. Non-Hispanic African-Americans had the highest per capita intake of processed beef, ground beef and steaks compared to three other race/ethnicity groups. [ 5 ]
More than half of beef purchased in the U.S. comes from retail stores and is prepared at home. Ground beef makes up the highest per capita intakes of beef both at home and away from home.
Ground beef consumption is highest among males age 12-19 who consume on average 50 pounds per year per capita. The 12-19 age group showed the highest consumption of ground beef for females, but the amount (28.5 lbs) is much lower than that of males. [ 5 ]
US dietary exposure has been estimated at 1-17 ng/kg bodyweight per day. [ 6 ] Table 3 shows the average daily lifetime consumption of HCAs for subgroups of the U.S. population. [ 7 ] This analysis was based on the food intake data of 27215 people participating in the 1994 to 1996 Continuing Survey of Food Intakes by Individuals (CSFII) survey. Approximately 16 percent of HCA exposure came from hamburgers.
African American males had 50-100% higher intakes than white males and African American males consumed three times as many HCAs as white males (Table 4). [ 7 ]
HCA formation during cooking depends on the type of meat, cooking temperature, the degree of browning and the cooking time. Meats that are lower in fat and water content show higher concentrations of HCAs after cooking. More HCAs are formed when pan surface temperatures are higher than 220 °C (428 °F) such as with most frying or grilling. However, HCAs also form at lower temperatures when the cooking time is long, as in roasting. HCA concentrations are higher in browned or burned crusts that result from high temperature. [ 4 ] The pan drippings and meat bits that remain after meat is fried have high concentrations of HCAs. Beef, chicken and fish have higher concentrations than pork. Sausages are high in fat and water and show lower concentrations. [ 8 ]
Ground beef patties show lower levels of HCAs if they are flipped every minute until the target temperature is reached. [ 9 ] Beef patties cooked while frozen show no difference in HCA levels compared to room-temperature patties. [ 10 ]
After scientists discovered the carcinogenic components in cigarette smoke, they questioned whether carcinogens could also be found in smoked/burned foods, such as meats. [ 3 ] In 1977, cancer-causing compounds heterocyclic amines were discovered in food as a result of household cooking processes. [ 3 ] [ 11 ]
The most potent of the HCAs, MeIQ, is almost 24 times more carcinogenic than aflatoxin , a carcinogen produced by mold . [ 3 ]
Most of the 20 HCAs are more toxic than benzopyrene , a carcinogen found in cigarette smoke and coal tar . MeIQ, IQ and 8-MeIQx are the most potent mutagens according to the Ames test . [ 12 ] These HCAs are 100 times more potent carcinogens than PhIP , the compound most commonly found as a result of normal cooking. [ 12 ] [ 13 ]
HCAs contribute to the development of cancer by causing gene mutations, causing new cells to grow in an uncontrolled manner and form a tumor . Epidemiological studies linked consumption of well-done meats with increased risk of certain cancers, including cancer of the colon or rectum. [ 14 ] A review of research articles on meat consumption and colon cancer estimated that red meat consumption contributed to 7 to 9% of colon cancer cases in European men and women. [ citation needed ]
Long-term rat studies showed that PhIP causes cancer of the colon and mammary gland in rats. [ 13 ] Female rats given doses of 0, 12.4, 25, 50, 100 or 200 ppm of PhIP showed a dose-dependent incidence of adenocarcinomas . The offspring of female rats exposed to PhIP while pregnant had a higher prevalence of adenocarcinomas than those whose mothers had not been exposed. This was true even for offspring who were not exposed to PhIP. PhIP was transferred from mothers to offspring in their milk.
The effects of HCAs and well-done cooked meat on humans are less well established. Meat consumption, especially of well-done meat and meat cooked at a high temperature, can be used as an indirect measure of exposure to HCAs. One review covered all research studies reported between 1996 and 2007 that examined relationships between HCAs, meat and cancer. [ 15 ] Twenty-two studies were found; of these, 18 showed a relationship between either meat intake or HCA exposure and some form of cancer. HCA exposure was measured in 10 of the studies and of those, 70% showed an association with cancer. The authors concluded that high intake of well-done meat and/or high exposure to certain HCAs may be associated with cancer of the colon, breast, prostate, pancreas, lung, stomach and esophagus.
A recent study found that the relative risk for colorectal cancer increased at intakes >41.4 ng/day. [ 16 ] Some evidence of increased relative risk occurred with intakes of MeIQx greater than or equal to 19.9 ng/day, but the trend was not as strong as for PhIP.
Recent studies had mixed results, finding no relationship between dietary heterocyclic amines and lung cancer in women who had never smoked, [ 17 ] no relationship between HCA intake and prostate cancer risk, [ 18 ] but suggesting a positive association between red meat, PhIP and bladder cancer [ 19 ] and increased risk of advanced prostate cancer with intakes of meat cooked at high temperatures. [ 20 ]
Although not all studies report an association between HCA and/or meat intake and cancers, the U.S. Department of Health and Human Services Public Health Service National Toxicology Program found sufficient evidence to label four HCAs as "reasonably anticipated to be a human carcinogen" in its twelfth Report on Carcinogens , published in 2011. The HCA known as IQ was first listed in the tenth report in 2002. MeIQ, MeIQx and PhIP were added to the list of anticipated carcinogens in 2004. [ 6 ] The Report on Carcinogens stated that MeIQ has been associated with rectal and colon cancer, MeIQx with lung cancer, IQ with breast cancer and PhIP with stomach and breast cancer. [ 6 ] However, no current federal guidelines focus on the recommended consumption limit of HCA levels in meat. [ 21 ] | https://en.wikipedia.org/wiki/Heterocyclic_amine_formation_in_meat |
In anatomy , a heterodont (from Greek , meaning 'different teeth') is an animal which possesses more than a single tooth morphology . [ 2 ] [ 3 ] Human dentition, for example, is heterodont and diphyodont . [ 4 ]
In vertebrates, heterodont pertains to animals whose teeth are differentiated into different forms. For example, members of the Synapsida generally possess incisors , canines ("dogteeth"), premolars , and molars . The presence of heterodont dentition is evidence of some degree of feeding and/or hunting specialization in a species . In contrast, homodont or isodont dentition refers to a set of teeth that possess the same tooth morphology.
In invertebrates, the term heterodont refers to a condition where teeth of differing sizes occur in the hinge plate, a part of the Bivalvia . [ 2 ]
| https://en.wikipedia.org/wiki/Heterodont
Heteroduplex analysis (HDA) is a method in biochemistry that has been used since 1992 to detect point mutations in DNA (deoxyribonucleic acid). [ 1 ] Heteroduplexes are dsDNA molecules that contain one or more mismatched base pairs, whereas homoduplexes are dsDNA molecules that are perfectly paired. [ 1 ] [ 2 ] The method depends on the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. [ 3 ] Heteroduplexes are formed between different DNA alleles. [ 4 ] In a mixture of amplified wild-type and mutant DNA, heteroduplexes form between mutant and wild-type strands, while homoduplexes form between matching strands. [ 5 ] There are two types of heteroduplexes, depending on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and have been verified by electron microscopy. [ 6 ] Single-base substitutions create less stable heteroduplexes called bubble-type heteroduplexes; because of their low stability they are difficult to visualize by electron microscopy. [ 5 ] HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene. [ 6 ] | https://en.wikipedia.org/wiki/Heteroduplex_analysis
Heterogamy is a term applied to a variety of distinct phenomena in different scientific domains. It usually involves some kind of difference ("hetero") in reproduction ("gamy"). See below for more specific senses.
In reproductive biology, heterogamy is the alternation of differently organized generations, applied to the alternation between a parthenogenetic and a sexual generation. [ 1 ] [ 2 ] This type of heterogamy occurs, for example, in some aphids .
Alternately, heterogamy or heterogamous is often used as a synonym of heterogametic , meaning the presence of two unlike chromosomes in a sex. [ 3 ] [ 4 ] For example, XY males and ZW females are called the heterogamous sex.
In cell biology , heterogamy is a synonym of anisogamy , the condition of having differently sized male and female gametes produced by different sexes or mating types in a species.
In botany , a plant is heterogamous when it carries at least two different types of flowers in regard to their reproductive structures, for example male and female flowers or bisexual and female flowers. Stamens and carpels are not regularly present in each flower or floret.
In sociology , heterogamy refers to a marriage between two individuals that differ in a certain criterion, and is contrasted with homogamy for a marriage or union between partners that match according to that criterion. For example, ethnic heterogamy refers to marriages involving individuals of different ethnic groups. Age heterogamy refers to marriages involving partners of significantly different ages. Heterogamy and homogamy are also used to describe marriage or union between people of unlike and like sex (or gender) respectively.
| https://en.wikipedia.org/wiki/Heterogamy
Heterogeneous catalysis is catalysis where the phase of catalysts differs from that of the reagents or products . [ 1 ] The process contrasts with homogeneous catalysis where the reagents, products and catalyst exist in the same phase. Phase distinguishes between not only solid , liquid , and gas components, but also immiscible mixtures (e.g., oil and water ), or anywhere an interface is present.
Heterogeneous catalysis typically involves solid phase catalysts and gas phase reactants. [ 2 ] In this case, there is a cycle of molecular adsorption, reaction, and desorption occurring at the catalyst surface. Thermodynamics, mass transfer, and heat transfer influence the rate (kinetics) of reaction .
Heterogeneous catalysis is very important because it enables faster, large-scale production and selective product formation. [ 3 ] Approximately 35% of the world's GDP is influenced by catalysis. [ 4 ] The production of 90% of chemicals (by volume) is assisted by solid catalysts. [ 2 ] The chemical and energy industries rely heavily on heterogeneous catalysis. For example, the Haber–Bosch process uses metal-based catalysts in the synthesis of ammonia , an important component in fertilizer; 144 million tons of ammonia were produced in 2016. [ 5 ]
Adsorption is an essential step in heterogeneous catalysis. Adsorption is the process by which a gas (or solution) phase molecule (the adsorbate) binds to solid (or liquid) surface atoms (the adsorbent). The reverse of adsorption is desorption , the adsorbate splitting from adsorbent. In a reaction facilitated by heterogeneous catalysis, the catalyst is the adsorbent and the reactants are the adsorbate.
Two types of adsorption are recognized: physisorption , weakly bound adsorption, and chemisorption , strongly bound adsorption. Many processes in heterogeneous catalysis lie between the two extremes. The Lennard-Jones model provides a basic framework for predicting molecular interactions as a function of atomic separation. [ 6 ]
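As a rough illustration of this framework, the 12-6 Lennard-Jones form can be written as V(r) = 4ε[(σ/r)^12 - (σ/r)^6]. The short Python sketch below evaluates it for a few separations; the well depth ε and length scale σ used here are arbitrary illustrative values, not parameters for any specific adsorbate-catalyst pair.

```python
def lennard_jones(r, epsilon=0.3, sigma=3.0):
    """12-6 Lennard-Jones pair potential; r in angstroms, result in eV."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)


if __name__ == "__main__":
    # epsilon and sigma are arbitrary illustrative values, not parameters for
    # any specific adsorbate/catalyst pair.
    for r in (2.8, 3.0, 3.37, 4.0, 5.0, 8.0):
        print(f"r = {r:4.2f} A   V = {lennard_jones(r):+.3f} eV")
    # The well minimum sits near r = 2**(1/6) * sigma (about 3.37 A here);
    # below sigma the potential is strongly repulsive, at large r it decays.
```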
In physisorption, a molecule becomes attracted to the surface atoms via van der Waals forces . These include dipole-dipole interactions, induced dipole interactions, and London dispersion forces. Note that no chemical bonds are formed between adsorbate and adsorbent, and their electronic states remain relatively unperturbed. Typical energies for physisorption are from 3 to 10 kcal/mol. [ 2 ] In heterogeneous catalysis, when a reactant molecule physisorbs to a catalyst, it is commonly said to be in a precursor state, an intermediate energy state before chemisorption, a more strongly bound adsorption. [ 6 ] From the precursor state, a molecule can either undergo chemisorption, desorption, or migration across the surface. [ 7 ] The nature of the precursor state can influence the reaction kinetics. [ 7 ]
When a molecule approaches close enough to surface atoms such that their electron clouds overlap, chemisorption can occur. In chemisorption, the adsorbate and adsorbent share electrons signifying the formation of chemical bonds . Typical energies for chemisorption range from 20 to 100 kcal/mol. [ 2 ] Two cases of chemisorption are molecular adsorption, in which the adsorbate remains intact, and dissociative adsorption, in which one or more bonds of the adsorbate break upon adsorption.
Most metal surface reactions occur by chain propagation in which catalytic intermediates are cyclically produced and consumed. [ 8 ] Two main mechanisms can be described for a surface reaction A + B → C: the Langmuir–Hinshelwood mechanism, in which both reactants adsorb onto the surface before reacting, and the Eley–Rideal mechanism, in which one adsorbed reactant reacts directly with the other as it arrives from the fluid phase. [ 2 ]
Most heterogeneously catalyzed reactions are described by the Langmuir–Hinshelwood model. [ 9 ]
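For a bimolecular reaction A + B → C with both reactants competitively adsorbed on one type of site, the Langmuir–Hinshelwood rate law takes the common closed form r = k·K_A p_A·K_B p_B / (1 + K_A p_A + K_B p_B)^2. The sketch below, using arbitrary illustrative constants, shows the characteristic maximum in rate as the pressure of one reactant is raised.

```python
def lh_rate(p_a, p_b, k=1.0, K_a=2.0, K_b=1.0):
    """Langmuir-Hinshelwood rate for A + B -> C with competitive adsorption."""
    denom = 1.0 + K_a * p_a + K_b * p_b
    theta_a = K_a * p_a / denom   # fractional coverage of A
    theta_b = K_b * p_b / denom   # fractional coverage of B
    return k * theta_a * theta_b


if __name__ == "__main__":
    # k, K_a and K_b are arbitrary illustrative constants.
    p_b = 1.0
    for p_a in (0.1, 0.5, 1.0, 2.0, 5.0, 20.0):
        print(f"p_A = {p_a:5.1f}   rate = {lh_rate(p_a, p_b):.4f}")
    # The rate passes through a maximum: at low p_A it is roughly first order
    # in A, while at high p_A adsorbed A crowds B off the surface and the
    # apparent order in A turns negative.
```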
In heterogeneous catalysis, reactants diffuse from the bulk fluid phase to adsorb to the catalyst surface. The adsorption site is not always an active catalyst site, so reactant molecules must migrate across the surface to an active site. At the active site, reactant molecules will react to form product molecule(s) by following a more energetically facile path through catalytic intermediates (see figure to the right). The product molecules then desorb from the surface and diffuse away. The catalyst itself remains intact and free to mediate further reactions. Transport phenomena such as heat and mass transfer, also play a role in the observed reaction rate.
Catalysts are not active towards reactants across their entire surface; only specific locations possess catalytic activity, called active sites . The surface area of a solid catalyst has a strong influence on the number of available active sites. In industrial practice, solid catalysts are often porous to maximize surface area, commonly achieving 50–400 m 2 /g. [ 2 ] Some mesoporous silicates , such as the MCM-41, have surface areas greater than 1000 m 2 /g. [ 10 ] Porous materials are cost effective due to their high surface area-to-mass ratio and enhanced catalytic activity.
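A back-of-the-envelope comparison shows why internal porosity, rather than small external particle size alone, is used to reach such areas: for non-porous spheres the specific external surface area is S = 6/(ρd), with ρ the density and d the particle diameter. The sketch below uses an approximate density for amorphous silica; the numbers are purely illustrative.

```python
def external_area_m2_per_g(density_g_cm3, diameter_m):
    """Specific external surface area of non-porous spheres, S = 6 / (rho * d)."""
    rho_kg_m3 = density_g_cm3 * 1000.0
    area_m2_per_kg = 6.0 / (rho_kg_m3 * diameter_m)
    return area_m2_per_kg / 1000.0


if __name__ == "__main__":
    rho_silica = 2.2  # g/cm^3, approximate skeletal density of amorphous silica
    for d in (1e-3, 1e-6, 10e-9):  # 1 mm, 1 micrometre, 10 nm
        print(f"d = {d:8.1e} m   S = {external_area_m2_per_g(rho_silica, d):10.2f} m^2/g")
    # Only particles around 10 nm or smaller reach hundreds of m^2/g as
    # external area, which is why industrial supports rely on internal pores.
```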
In many cases, a solid catalyst is dispersed on a supporting material to increase surface area (spread the number of active sites) and provide stability. [ 2 ] Usually catalyst supports are inert, high melting point materials, but they can also be catalytic themselves. Most catalyst supports are porous (frequently carbon, silica, zeolite, or alumina-based) [ 4 ] and chosen for their high surface area-to-mass ratio. For a given reaction, porous supports must be selected such that reactants and products can enter and exit the material.
Often, substances are intentionally added to the reaction feed or on the catalyst to influence catalytic activity, selectivity, and/or stability. These compounds are called promoters. For example, alumina (Al 2 O 3 ) is added during ammonia synthesis to provide greater stability by slowing sintering processes on the Fe-catalyst. [ 2 ]
The Sabatier principle can be considered one of the cornerstones of the modern theory of catalysis. [ 11 ] It states that the surface-adsorbate interaction has to be of an optimal strength: not so weak that the surface is inert toward the reactants, and not so strong that the surface is poisoned and desorption of the products is prevented. [ 12 ] The statement that the surface-adsorbate interaction has to be an optimum is a qualitative one. Usually the number of adsorbates and transition states associated with a chemical reaction is large, so the optimum has to be found in a many-dimensional space. Catalyst design in such a many-dimensional space is not a computationally viable task, and such an optimization process would be far from intuitive.

Scaling relations are used to decrease the dimensionality of the catalyst design space. [ 13 ] Such relations are correlations among adsorbate binding energies (or between adsorbate binding energies and transition states, also known as BEP relations ) [ 14 ] that are "similar enough", e.g., OH versus OOH scaling. [ 15 ] Applying scaling relations to catalyst design problems greatly reduces the dimensionality of the space (sometimes to as little as 1 or 2). [ 16 ] One can also use micro-kinetic modeling based on such scaling relations to take into account the kinetics associated with adsorption, reaction and desorption of molecules under specific pressure or temperature conditions. [ 17 ] Such modeling then leads to the well-known volcano plots, in which the optimum qualitatively described by the Sabatier principle is referred to as the "top of the volcano". Scaling relations can be used not only to connect the energetics of radical surface-adsorbed groups (e.g., O*, OH*), [ 13 ] but also to connect the energetics of closed-shell molecules with each other or with the counterpart radical adsorbates. [ 18 ]

A recent challenge for researchers in the catalytic sciences is to "break" the scaling relations. [ 19 ] The correlations manifested in the scaling relations confine the catalyst design space, preventing one from reaching the "top of the volcano". Breaking scaling relations can refer either to designing surfaces or motifs that do not follow a scaling relation, or to ones that follow a different scaling relation (than the usual relation for the associated adsorbates) in the right direction: one that brings the catalyst closer to the top of the reactivity volcano. [ 16 ] In addition to studying catalytic reactivity, scaling relations can be used to study and screen materials for selectivity toward a particular product. [ 20 ] There are special combinations of binding energies that favor specific products over others. Sometimes a set of binding energies that could change the selectivity toward a specific product "scale" with each other, so to improve the selectivity one has to break some scaling relations; an example is the scaling between methane and methanol oxidative activation energies, which leads to a lack of selectivity in the direct conversion of methane to methanol. [ 21 ]
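A toy numerical illustration of the volcano behaviour (not a model of any real catalyst) can be built from a single binding-energy descriptor and two assumed BEP-like linear relations acting in opposite directions, one for reactant activation and one for product desorption, with the slower step taken as rate-limiting.

```python
import math

KB_T = 0.0257  # eV, thermal energy near room temperature


def toy_rate(binding_energy):
    """Toy volcano: two BEP-like linear barriers, the slower step limits the rate."""
    e_activation = max(0.0, 0.8 + 0.5 * binding_energy)   # easier with stronger binding
    e_desorption = max(0.0, 0.1 - 0.5 * binding_energy)   # harder with stronger binding
    return min(math.exp(-e_activation / KB_T), math.exp(-e_desorption / KB_T))


if __name__ == "__main__":
    # binding_energy is the single descriptor (eV); more negative = stronger binding.
    # All coefficients are invented for illustration.
    for dE in (-2.0, -1.5, -1.0, -0.7, -0.4, -0.1, 0.2):
        print(f"dE = {dE:+.1f} eV   log10(rate) = {math.log10(toy_rate(dE)):7.2f}")
    # The maximum ("top of the volcano") lies at intermediate binding strength,
    # as the Sabatier principle states.
```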
Catalyst deactivation is defined as a loss in catalytic activity and/or selectivity over time.
Substances that decrease the reaction rate are called poisons . Poisons chemisorb to the catalyst surface and reduce the number of available active sites for reactant molecules to bind to. [ 22 ] Common poisons include Group V, VI, and VII elements (e.g. S, O, P, Cl), some toxic metals (e.g. As, Pb), and adsorbing species with multiple bonds (e.g. CO, unsaturated hydrocarbons). [ 6 ] [ 22 ] For example, sulfur disrupts the production of methanol by poisoning the Cu/ZnO catalyst. [ 23 ] Substances that increase reaction rate are called promoters . For example, the presence of alkali metals in ammonia synthesis increases the rate of N 2 dissociation. [ 23 ]
The presence of poisons and promoters can alter the activation energy of the rate-limiting step and affect a catalyst's selectivity for the formation of certain products. Depending on the amount, a substance can be favorable or unfavorable for a chemical process. For example, in the production of ethylene, a small amount of chemisorbed chlorine will act as a promoter by improving Ag-catalyst selectivity towards ethylene over CO 2 , while too much chlorine will act as a poison. [ 6 ]
Other mechanisms for catalyst deactivation include fouling of the surface by carbonaceous deposits (coking), sintering of the active phase or support at high temperatures, and mechanical degradation such as attrition of the catalyst particles.
In industry, catalyst deactivation costs billions every year due to process shutdown and catalyst replacement. [ 22 ]
In industry, many design variables must be considered including reactor and catalyst design across multiple scales ranging from the subnanometer to tens of meters. The conventional heterogeneous catalysis reactors include batch , continuous , and fluidized-bed reactors , while more recent setups include fixed-bed, microchannel, and multi-functional reactors . [ 6 ] Other variables to consider are reactor dimensions, surface area, catalyst type, catalyst support, as well as reactor operating conditions such as temperature, pressure, and reactant concentrations.
Some large-scale industrial processes incorporating heterogeneous catalysts are listed below. [ 4 ]
Although the majority of heterogeneous catalysts are solids, there are a few variations which are of practical value. For two immiscible solutions (liquids), one carries the catalyst while the other carries the reactant. This set up is the basis of biphasic catalysis as implemented in the industrial production of butyraldehyde by the hydroformylation of propylene. [ 31 ] | https://en.wikipedia.org/wiki/Heterogeneous_catalysis |
The design of heterogeneous catalytic reactors puts emphasis on catalyst effectiveness factors and the implications of heat and mass transfer. Heterogeneous catalytic reactors are among the most commonly utilized chemical reactors in the chemical engineering industry.
Heterogeneous catalytic reactors are commonly classified by the relative motion of the catalyst particles.
A fixed bed reactor is a cylindrical tube filled with catalyst pellets; reactants flow through the bed and are converted into products. The catalyst may be arranged in multiple configurations, including one large bed, several horizontal beds, several parallel packed tubes, or multiple beds in their own shells. The various configurations may be adapted depending on the need to maintain temperature control within the system. Connecting two reactors in series, with the option to dose oxidant between the stages, can under optimal conditions increase the product yield in oxidation catalysis. [ 1 ] By dosing intermediates or products between the stages, valuable information can be obtained concerning the reaction pathways.
The catalyst pellets may be spherical, cylindrical, or randomly shaped, and typically range from 0.25 cm to 1.0 cm in diameter. The flow in a fixed bed reactor is typically downward. A fixed bed reactor is also referred to as a packed bed reactor .
A trickle-bed reactor is a fixed bed in which liquid flows without filling the spaces between particles. As in fixed bed reactors, the liquid typically flows downward, while gas flows upward. The primary use for trickle-bed reactors is hydrotreatment reactions ( hydrodesulfurization and hydrodemetalation of heavy crude oil, [ 2 ] hydrodeasphaltenization of coal tar [ 3 ] ). This reactor is often utilized to handle feeds with extremely high boiling points.
A moving bed reactor has a fluid phase that passes up through a packed bed. Solid is fed into the top of the reactor, moves down, and is removed at the bottom. Moving bed reactors require special control valves to maintain close control of the solids; for this reason, they are used less frequently than the two reactor types above. Moving bed reactors are most suitable for solids contents below 10% and are generally used where the solids (primarily catalyst) have a high surface area owing to their micron-scale size.
A rotating bed reactor (RBR) holds a packed bed fixed within a basket with a central hole. When the basket spins while immersed in a fluid phase, the inertial forces created by the spinning motion force the fluid outwards, thereby creating a circulating flow through the rotating packed bed. The rotating bed reactor is a rather new invention that shows high rates of mass transfer and good fluid mixing. RBR-type reactors have frequently been applied in high-value biocatalysis reactions, offering convenient reuse of immobilized enzymes [ 4 ] while preventing mechanical damage to the solid-phase catalysts. [ 5 ] RBR constructions are also emerging in the nuclear energy industry to purify liquid waste on the scale of hundreds of cubic meters. [ 6 ]
A fluidized bed reactor suspends small particles of catalyst by the upward motion of the fluid to be reacted. The fluid is typically a gas with a flow rate high enough to mix the particles without carrying them out of the reactor. The particles, typically 10-300 microns in size, are much smaller than those used in the reactors above. One key advantage of using a fluidized bed reactor is the ability to achieve a highly uniform temperature in the reactor. Fluidized bed reactors are well suited for bio-catalysts or enzymes doped on solids, since the solids are fluidized by the working fluid and there is no mechanical impact on them.
A slurry reactor contains the catalyst in a powdered or granular form. [ 7 ] This reactor is typically used when one reactant is a gas and the other a liquid while the catalyst is a solid. The reactant gas is put through the liquid and dissolved. It then diffuses onto the catalyst surface.
Slurry reactors can use very fine particles and this can lead to problems of separation of catalyst from the liquid.
Trickle-bed reactors do not have this problem, and this is a big advantage of the trickle-bed reactor. Unfortunately, the larger particles in a trickle bed mean a much lower reaction rate.
Overall, the trickle bed is simpler, the slurry reactor usually has a high reaction rate, and the fluidized bed is somewhere in between. | https://en.wikipedia.org/wiki/Heterogeneous_catalytic_reactor
Heterogeneous combustion , otherwise known as combustion in porous media , is a type of combustion in which a solid and a gas phase interact to promote the complete conversion of reactants to products of lower potential energy. In this type of combustion, a high-surface-area solid is immersed in a gaseous reacting flow; additional fluid phases may or may not be present. Chemical reactions and heat transfer occur locally on each phase and between both phases. Heterogeneous combustion differs from catalysis in that neither phase is considered in isolation; rather, both are examined simultaneously. In some materials, such as silicon carbide (SiC), oxide layers (SiO and SiO 2 ) that form on the surface enable the adsorption of water vapor from the gas phase onto the solid, lowering partial pressures. [ 1 ] In this regime of combustion, thermal energy released by the combustion byproducts is transferred into the solid phase by convection ; conduction and radiation then carry heat upstream through the solid (along with adverse convection within the gas phase). Heat is then convectively transferred to the unburnt reactants. [ 2 ]
Within the literature, there are many applications of heterogeneous combustion, which derive from the unique manner in which this combustion process recirculates heat. These devices may be utilized either as stand-alone devices or in conjunction with other means of energy conversion for highly efficient combined heat and power (CHP) applications. For example, electricity production via both radiative and convective heat exchange with the combustion chamber can be accomplished using Organic Rankine Cycles in a multi-step heating process, [ 1 ] or using strictly radiative emissions via photovoltaic and thermionic generators. [ 1 ] Heterogeneous combustors may be utilized for small-scale heating purposes, [ 3 ] and as oxidizers of volatile organic compounds (VOCs). [ 4 ] Heterogeneous combustion may also be combined in series and parallel with multiple injection stages for use in gas flares at chemical manufacturing plants or oil wells. [ 1 ]
Within a combustion chamber containing porous media, structure of the environment can be assumed as follows. A preheating region exists prior to the surface of the flame front denoted by δ p . Preheating length is marked by the beginning of the porous solid where appreciable heat transfer to the gas phase occurs and ends when the solid and gas phase reach equilibrium temperature. The region of chemical heat release, the flame, whose thickness can be given as δ L , exists following the preheat region and its length is dependent upon mass flux, surface properties, and equivalence ratio. Beyond the flame, where minimal chemical heat release occurs, heat is convectively transferred from the post combustion gases into the solid. Heat then conducts and radiates through the solid structure upstream through the flame. Within the preheating region, heat is again convectively transferred from the solid structure to the gas. [ 5 ]
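The heat recirculation described above can be caricatured with a zero-dimensional energy balance: if a fraction η of the sensible heat of the burned gas is returned to the incoming reactants through the solid, the peak gas temperature rises above the adiabatic flame temperature (so-called excess-enthalpy or superadiabatic combustion). The sketch below is a deliberately crude illustration with made-up numbers; it neglects heat losses and dissociation.

```python
def peak_gas_temperature(t_inlet, dt_adiabatic, eta):
    """Solve T_f = T_inlet + eta*(T_f - T_inlet) + dT_ad  =>  T_f = T_in + dT_ad/(1-eta)."""
    return t_inlet + dt_adiabatic / (1.0 - eta)


if __name__ == "__main__":
    t_inlet = 300.0   # K
    dt_ad = 1500.0    # K, adiabatic temperature rise of a lean mixture (made-up value)
    for eta in (0.0, 0.1, 0.2, 0.3):   # fraction of burned-gas sensible heat recirculated
        print(f"eta = {eta:.1f}   peak T = {peak_gas_temperature(t_inlet, dt_ad, eta):6.0f} K")
    # eta = 0 recovers the adiabatic flame temperature; any recirculation pushes
    # the peak above it, which is what lets porous burners sustain very lean mixtures.
```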
The flame structure inside the porous matrix has been imaged by using X-ray absorption. [ 6 ] To evaluate the temperature within the gas phase, the reacting mixture was diluted with Krypton: an inert gas that has a large X-ray absorption coefficient. [ 7 ] | https://en.wikipedia.org/wiki/Heterogeneous_combustion |
Heterogeneous gold catalysis refers to the use of elemental gold as a heterogeneous catalyst . As in most heterogeneous catalysis, the metal is typically supported on metal oxide. Furthermore, as seen in other heterogeneous catalysts, activity increases with a decreasing diameter of supported gold clusters. Several industrially relevant processes are also observed such as H 2 activation, Water-gas shift reaction , and hydrogenation . [ 1 ] [ 2 ] [ 3 ] One or two gold-catalyzed reactions may have been commercialized. [ 4 ]
The high activity of supported gold clusters has been proposed to arise from a combination of structural changes, quantum-size effects and support effects that preferentially tune the electronic structure of gold [ 5 ] such that optimal binding of adsorbates during the catalytic cycle is enabled. [ 2 ] [ 3 ] [ 6 ] The selectivity and activity of gold nanoparticles can be finely tuned by varying the choice of support material, with e.g. titania (TiO 2 ), hematite (α-Fe 2 O 3 ), cobalt(II/III) oxide (Co 3 O 4 ) and nickel(II) oxide (NiO) serving as the most effective support materials for facilitating the catalysis of CO combustion. [ 1 ] Besides enabling an optimal dispersion of the nanoclusters, the support materials have been suggested to promote catalysis by altering the size, shape, strain and charge state of the cluster. [ 3 ] [ 7 ] [ 8 ] A precise shape control of the deposited gold clusters has been shown to be important for optimizing the catalytic activity, with hemispherical, few atomic layers thick nanoparticles generally exhibiting the most desirable catalytic properties due to maximized number of high-energy edge and corner sites. [ 1 ] [ 6 ] [ 9 ]
In the past, heterogeneous gold catalysts found preliminary commercial applications for the industrial production of vinyl chloride (the precursor to polyvinyl chloride , or PVC) and methyl methacrylate . [ 4 ] Traditionally, PVC production uses mercury catalysts and leads to serious environmental concerns. China accounts for 50% of the world's mercury emissions, and 60% of China's mercury emissions are caused by PVC production. Although gold catalysts are slightly expensive, the overall production cost is affected by only ~1%, so green gold catalysis is considered valuable. Fluctuations in the price of gold, however, later led to the cessation of operations based on their use in catalytic converters. Very recently, there have been many developments in gold catalysis for the synthesis of organic molecules, including C-C bond-forming homocoupling and cross-coupling reactions, and it has been speculated that some of these catalysts could find applications in various fields. [ 10 ]
Gold can be a very active catalyst in oxidation of carbon monoxide (CO), i.e. the reaction of CO with molecular oxygen to produce carbon dioxide (CO 2 ). Particles of 2 to 5 nm exhibit high catalytic activities. Supported gold clusters , thin films and nanoparticles are one to two orders of magnitude more active than atomically dispersed gold cations or unsupported metallic gold. [ 2 ]
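A rough geometric estimate helps explain why particles this small are needed: treating the particle as a sphere and counting atoms within one atomic diameter of the surface as surface atoms gives a surface fraction of 1 - ((d - 2·d_atom)/d)^3. The sketch below uses an approximate atomic diameter for gold and is an order-of-magnitude illustration only (real clusters are faceted, not spherical).

```python
D_ATOM_AU = 0.29  # nm, approximate diameter of a gold atom


def surface_fraction(d_particle_nm, d_atom_nm=D_ATOM_AU):
    """Fraction of atoms within one atomic diameter of the surface of a sphere."""
    if d_particle_nm <= 2.0 * d_atom_nm:
        return 1.0
    return 1.0 - ((d_particle_nm - 2.0 * d_atom_nm) / d_particle_nm) ** 3


if __name__ == "__main__":
    for d in (1, 2, 3, 5, 10, 20):
        print(f"d = {d:3d} nm   surface fraction ~ {surface_fraction(d):.2f}")
    # For 2-5 nm particles roughly a third to two thirds of the atoms sit at the
    # surface; for 20 nm particles the fraction falls below 10%.
```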
Gold cations can be dispersed atomically on basic metal oxide supports such as MgO and La 2 O 3 . Monovalent and trivalent gold cations have been identified, the latter being more active but less stable than the former. The turnover frequency (TOF) of CO oxidation on these cationic gold catalysts is in the order of magnitude of 0.01 s −1 , exhibiting the very high activation energy of 138 kJ/mol. [ 2 ]
Supported gold nanoclusters with a diameter < 2 nm are active to CO oxidation with turnover number (TOF) in the order of magnitude of 0.1 s −1 . It has been observed that clusters with 8 to 100 atoms are catalytically active. The reason is that, on one hand, eight atoms are the minimum necessary to form a stable, discrete energy band structure , and on the other hand, d-band splitting decreases in clusters with more than 100 atoms, resembling the bulk electronic structure. The support has a substantial effect on the electronic structure of gold clusters. Metal hydroxide supports such as Be(OH) 2 , Mg(OH) 2 , and La(OH) 3 , with gold clusters of < 1.5 nm in diameter constitute highly active catalysts for CO oxidation at 200 K (-73 °C). By means of techniques such as HR-TEM and EXAFS , it has been proven that the activity of these catalysts is due exclusively to clusters with 13 atoms arranged in an icosahedron structure. Furthermore, the metal loading should exceed 10 wt% for the catalysts to be active. [ 2 ]
Gold nanoparticles in the size range of 2 to 5 nm catalyze CO oxidation with a TOF of about 1 s −1 at temperatures below 273 K (0 °C). The catalytic activity of nanoparticles is brought about in the absence of moisture when the support is semiconductive or reducible , e.g. TiO 2 , MnO 2 , Fe 2 O 3 , ZnO , ZrO 2 , or CeO 2 . However, when the support is insulating or non-reducible, e.g. Al 2 O 3 and SiO 2 , a moisture level > 5000 ppm is required for activity at room temperature. In the case of powder catalysts prepared by wet methods, the surface OH − groups on the support provide sufficient aid as co-catalysts, so that no additional moisture is necessary. At temperatures above 333 K (60 °C), no water is needed at all. [ 2 ]
The apparent activation energy of CO oxidation on supported gold powder catalysts prepared by wet methods is 2-3 kJ/mol above 333 K (60 °C) and 26-34 kJ/mol below 333 K. These energies are low, compared to the values displayed by other noble metal catalysts (80-120 kJ/mol). The change in activation energy at 333 K can be ascribed to a change in reaction mechanism. This explanation has been supported experimentally. At 400 K (127 °C), the reaction rate per surface Au atom is not dependent on particle diameter, but the reaction rate per perimeter Au atom is directly proportional to particle diameter. This suggests that the mechanism above 333 K takes place on the gold surfaces. By contrast, at 300 K (27 °C), the reaction rate per surface Au atom is inversely proportional to particle diameter, while the rate per perimeter interface does not depend on particle size. Hence, CO oxidation occurs on the perimeter sites at room temperature. Further information on the reaction mechanism has been revealed by studying the dependency of the reaction rate on the partial pressures of the reactive species. Both at 300 K and 400 K, there is a first order rate dependency on CO partial pressure up to 4 Torr (533 Pa), above which the reaction is zero order. With respect to O 2 , the reaction is zero order above 410 Torr (54.7 kPa) at both 300 and 400 K. The order with respect to O 2 at lower partial pressures is 1 at 300 K and 0.5 at 400 K. The shift towards zero order indicates that the catalyst's active sites are saturated with the species in question. Hence, a Langmuir-Hinshelwood mechanism has been proposed, in which CO adsorbed on gold surfaces reacts with O adsorbed at the edge sites of the gold nanoparticles. [ 2 ]
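The link between site saturation and reaction order can be illustrated with a Langmuir-type expression: if the rate is proportional to the coverage of one species, r ∝ Kp/(1 + Kp), the apparent order d ln r/d ln p falls smoothly from 1 at low pressure to 0 once the sites are saturated. The constant K in the sketch below is an arbitrary example value, not a fitted parameter for any gold catalyst.

```python
import math


def rate(p, K=0.5):
    """Rate proportional to a Langmuir coverage, r ~ K*p / (1 + K*p)."""
    return K * p / (1.0 + K * p)


def apparent_order(p, K=0.5, h=1e-4):
    """Numerical estimate of d ln(r) / d ln(p) at pressure p."""
    return (math.log(rate(p * (1.0 + h), K)) - math.log(rate(p, K))) / math.log(1.0 + h)


if __name__ == "__main__":
    # K is an arbitrary example constant (1/Torr), not fitted to any catalyst.
    for p in (0.01, 0.1, 1.0, 4.0, 10.0, 100.0):
        print(f"p = {p:7.2f} Torr   apparent order = {apparent_order(p):.2f}")
    # Far below saturation the order is ~1; once the sites saturate it tends to 0.
```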
The need to use oxide supports, and more specifically reducible supports, is due to their ability to activate dioxygen . Gold nanoparticles supported on inert materials such as carbon or polymers have been proven inactive in CO oxidation. The aforementioned dependency of some catalysts on water or moisture also relates to oxygen activation. The ability of certain reducible oxides, such as MnO 2 , Co 3 O 4 , and NiO to activate oxygen in dry conditions (< 0.1 ppm H 2 O) can be ascribed to the formation of oxygen defects during pretreatment. [ 2 ]
Water gas shift is the most widespread industrial process for the production of dihydrogen , H 2 . It involves the reaction of carbon monoxide and water ( syngas ) to form hydrogen and carbon dioxide as a byproduct. In many catalytic reaction schemes, one of the elementary reactions is the oxidation of CO with an adsorbed oxygen species. Gold catalysts have been proposed as an alternative for water gas shift at low temperatures, viz. < 523 K (250 °C). This technology is essential to the development of solid oxide fuel cells . Hematite has been found to be an appropriate catalyst support for this purpose. Furthermore, a bimetallic Au- Ru /Fe 2 O 3 catalyst has been proven highly active and stable for low-temperature water gas shift. Titania and ceria have also been used as supports for effective catalysts. Unfortunately, Au/ CeO 2 is prone to deactivation caused by surface-bound carbonate or formate species. [ 12 ]
Although gold catalysts are active at room temperature to CO oxidation, the high amounts of water involved in water gas shift require higher temperatures. At such temperatures, gold is fully reduced to its metallic form. However, the activity of e.g. Au/CeO 2 has been enhanced by CN − treatment, whereby metallic gold is leached, leaving behind highly active cations. According to DFT calculations, the presence of such Au cations on the catalyst is allowed by empty, localized nonbonding f states in CeO 2 . On the other hand, STEM studies of Au/CeO 2 have revealed nanoparticles of 3 nm in diameter. Water gas shift has been proposed to occur at the interface of Au nanoparticles and the reduced CeO 2 support. [ 12 ]
Although the epoxidation of ethylene is routinely achieved in the industry with selectivities as high as 90% on Ag catalysts, most catalysts provided < 10% selectivity for propylene epoxidation. Using a gold catalyst supported on titanium silicate-1 (TS-1) molecular sieve , yields of 350 g/h per gram of gold were obtained at 473 K (200 °C). The reaction took place in the gas phase. Furthermore, using mesoporous titanosilicate supports (Ti- MCM -41 and Ti- MCM -48), gold catalysts provided > 90% selectivity at ~ 7% propylene conversion , 40% H 2 efficiency, and 433 K (160 °C). The active species in these catalysts were identified to be hemispherical gold nano-crystals of less than 2 nm in diameter in intimate contact with the support. [ 12 ]
Alkene epoxidation has been demonstrated in absence of H 2 reductant in the liquid phase. For example, using 1% Au/ graphite , ~80% selectivities of cis-cyclooctene to cyclooctene oxide (analogous to cyclohexene oxide ) were obtained at 7-8% conversion, 353 K (80 °C), and 3 MPa O 2 in absence of hydrogen or solvent. [ 12 ] Other liquid-phase selective oxidations have been achieved with saturated hydrocarbons. For instance, cyclohexane has been converted to cyclohexanone and cyclohexanol with a combined selectivity of ~100% on gold catalysts. Product selectivities can be tuned in liquid phase reactions by the presence or absence of solvent and by the nature of the latter, viz. water, polar , or nonpolar . With gold catalysts, the catalyst's support has less influence on reactions in the liquid phase than on reactions in the gas phase. [ 13 ]
Typical hydrogenation catalysts are based on metals from the 8 , 9 , and 10 groups, such as Ni , Ru , Pd , and Pt . By comparison, gold has a poor catalytic activity for hydrogenation. [ 14 ] This low activity is caused by the difficulty of dihydrogen activation on gold. While hydrogen dissociates on Pd and Pt without an energy barrier , dissociation on Au( 111 ) has an energy barrier of ~1.3 eV , according to DFT calculations. These calculations agree with experimental studies, in which hydrogen dissociation was not observed on gold ( 111 ) or ( 110 ) terraces, nor on ( 331 ) steps. No dissociation was observed on these surfaces either at room temperature or at 473 K (200 °C). However, the rate of hydrogen activation increases for Au nanoparticles. [ 2 ] Notwithstanding its poor activity, nano-sized gold immobilized in various supports has been found to provide a good selectivity in hydrogenation reactions. [ 14 ]
One of the early studies (1966) of hydrogenation on supported, highly dispersed gold was performed with 1-butene and cyclohexene in the gas phase at 383 K (110 °C). The reaction rate was found to be first order with respect to alkene pressure and second order with respect to chemisorbed hydrogen. In later works, it was shown that gold-catalyzed hydrogenation can be highly sensitive to Au loading (hence to particle size) and to the nature of the support. For example, 1-pentene hydrogenation occurred optimally on 0.04 wt% Au/ SiO 2 , but not at all on Au/ γ-Al 2 O 3 . [ 12 ] By contrast, the hydrogenation of 1,3-butadiene to 1-butene was shown to be relatively insensitive to Au particle size in a study with a series of Au/Al 2 O 3 catalysts prepared by different methods. With all the tested catalysts, conversion was ~100% and selectivity, < 60%. [ 14 ] Concerning reaction mechanisms, in a study of propylene hydrogenation on Au/SiO 2 , reaction rates were determined using D 2 and H 2 . Because the reaction with deuterium was substantially slower, it was suggested that the rate-determining step in alkene hydrogenation was the cleavage of the H-H bond. Lastly, ethylene hydrogenation was studied on Au/ MgO at atmospheric pressure and 353 K (80 °C) with EXAFS , XANES and IR spectroscopy , suggesting that the active species might be Au +3 and the reaction intermediate , an ethylgold species. [ 12 ]
Gold catalysts are especially selective in the hydrogenation of α,β-unsaturated aldehydes, i.e. aldehydes containing a C=C double bond on the carbon adjacent to the carbonyl . Gold catalysts are able to hydrogenate only the carbonyl group, so that the aldehyde is transformed to the corresponding alcohol , while leaving the C=C double bond untouched. In the hydrogenation of crotonaldehyde to crotyl alcohol , 80% selectivity was attained at 5-10% conversion and 523 K (250 °C) on Au/ ZrO 2 and Au/ ZnO . The selectivity increased along with Au particle size in the range of ~2 to ~5 nm. Other instances of this reaction include acrolein , citral , benzal acetone , and pent-3-en-2-one. The activity and selectivity of gold catalysts for this reaction have been linked to the morphology of the nanoparticles, which in turn is influenced by the support. For example, round particles tend to form on TiO 2 , while ZnO promotes particles with clear facets, as observed by TEM . The higher activity observed with Au/TiO 2 compared to Au/ZnO is explained by the round morphology providing a higher relative amount of low-coordinated metal surface sites. Finally, a bimetallic Au- In /ZnO catalyst has been observed to improve the selectivity towards the hydrogenation of the carbonyl in acrolein. It was observed in HRTEM images that indium thin films decorate some of the facets of the gold nanoparticles. The promoting effect on selectivity might result from the fact that only the Au sites that promote side-reactions are decorated by In. [ 12 ]
A strategy that in many reactions has succeeded at improving gold's catalytic activity without impairing its selectivity is to synthesize bimetallic Pd -Au or Pt -Au catalysts. For the hydrogenation of 1,3-butadiene to butenes , model surfaces of Au( 111 ), Pd-Au( 111 ), Pd-Au( 110 ), and Pd( 111 ) were studied with LEED , AES , and LEIS . A selectivity of ~100% was achieved on Pd 70 Au 30 ( 111 ) and it was suggested that Au might promote the desorption of the product during the reaction. A second instance is the hydrogenation of p -chloronitrobenzene to p -chloroaniline , in which selectivity suffers with typical hydrogenation catalysts due to the parallel hydrodechlorination to aniline . However, Pd-Au/Al 2 O 3 (Au/Pd ≥20) has been proven thrice as active as the pure Au catalyst, while being ~100% selective to p -chloroaniline. In a mechanistic study of hydrogenation of nitrobenzenes with Pt-Au/TiO 2 , the dissociation of H 2 was identified as rate-controlling , hence the incorporation of Pt, an efficient hydrogenation metal, highly improved catalytic activity. Dihydrogen dissociated on Pt and the nitroaromatic compound was activated on the Au-TiO 2 interface. Finally, hydrogenation was enabled by the spillover of activated H surface species from Pt to the Au surface. [ 14 ] [ 15 ]
Bulk metallic gold is known to be inert, exhibiting a surface reactivity at room temperature only towards a few substances such as formic acid and sulphur-containing compounds, e.g. H 2 S and thiols . [ 1 ] Within heterogeneous catalysis, reactants adsorb onto the surface of the catalyst thus forming activated intermediates. However, if the adsorption is weak such as in the case of bulk gold, a sufficient perturbation of the reactant electronic structure does not occur and catalysis is hindered ( Sabatier's principle ). When gold is deposited as nanosized clusters of less than 5 nm onto metal oxide supports, a markedly increased interaction with adsorbates is observed, thereby resulting in surprising catalytic activities. Evidently, nano-scaling and dispersing gold on metal oxide substrates makes gold less noble by tuning its electronic structure, but the precise mechanisms underlying this phenomenon are as of yet uncertain and hence widely studied. [ 3 ] [ 13 ] [ 16 ]
It is generally known that decreasing the size of metallic particles in some dimension to the nanometer scale will yield clusters with a significantly more discrete electronic band structure in comparison with the bulk material. [ 9 ] This is an example of a quantum-size effect and has been previously correlated with an increased reactivity enabling nanoparticles to bind gas phase molecules more strongly. In the case of TiO 2 -supported gold nanoparticles, Valden et al. [ 2 ] observed the opening of a band gap of approximately 0.2-0.6 eV in the gold electronic structure as the thickness of the deposited particles was decreased below three atomic layers. The two-layer thick supported gold clusters were also shown to be exceptionally active for CO combustion, based on which it was concluded that quantum-size effects inducing a metal-insulator transition play a key role in enhancing the catalytic properties of gold. However, decreasing the size further to a single atomic layer and a diameter of less than 3 nm was reported to again decrease the activity. This has later been explained by a destabilization of clusters composed of very few atoms, resulting in too strong bonding of adsorbates and thus poisoning of the catalyst. [ 3 ] [ 8 ]
The properties of the metal d-band are central for describing the origin of catalytic activity based on electronic effects. [ 17 ] According to the d-band model of heterogeneous catalysis, substrate-adsorbate bonds are formed as the discrete energy levels of the adsorbate molecule interact with the metal d-band, thus forming bonding and antibonding orbitals. The strength of the formed bond depends on the position of the d-band center, such that a d-band closer to the Fermi level (E_F) will result in a stronger interaction. The d-band center of bulk gold is located far below E_F, which qualitatively explains the observed weak binding of adsorbates, as both the bonding and antibonding orbitals formed upon adsorption will be occupied, resulting in no net bonding. [ 17 ] However, as the size of gold clusters is decreased below 5 nm, it has been shown that the d-band center of gold shifts to energies closer to the Fermi level, such that the antibonding orbital thus formed will be pushed to an energy above E_F, hence reducing its filling. [ 18 ] [ 19 ] In addition to a shift in the d-band center of gold clusters, the size-dependency of the d-band width as well as the 5d_{3/2}-5d_{5/2} spin-orbit splitting has been studied from the viewpoint of catalytic activity. [ 20 ] As the size of the gold clusters is decreased below 150 atoms (diameter ca. 2.5 nm), rapid drops in both values occur. This can be attributed to d-band narrowing due to the decreased number of hybridizing valence states of small clusters as well as to the increased ratio of high-energy edge atoms with low coordination to the total number of Au atoms. The effect of the decreased 5d_{3/2}-5d_{5/2} spin-orbit splitting, as well as of the narrower distribution of d-band states, on the catalytic properties of gold clusters cannot be understood via simple qualitative arguments as in the case of the d-band center model. Nevertheless, the observed trends provide further evidence that a significant perturbation of the Au electronic structure occurs upon nanoscaling, which is likely to play a key role in the enhancement of the catalytic properties of gold nanoparticles.
A central structural argument explaining the high activity of metal oxide supported gold clusters is based on the concept of periphery sites formed at the junction between the gold cluster and the substrate. [ 1 ] [ 2 ] In the case of CO oxidation, it has been hypothesized that CO adsorbs onto the edges and corners of the gold clusters, while the activation of oxygen occurs at the peripheral sites. The high activity of edge and corner sites towards adsorption can be understood by considering the high coordinative unsaturation of these atoms in comparison with terrace atoms. The low degree of coordination increases the surface energy of corner and edge sites, hence making them more active towards binding adsorbates. This is further coupled with the local shift of the d-band center of the unsaturated Au atoms towards energies closer to the Fermi level, which in accordance with the d-band model results in increased substrate-adsorbate interaction and lowering of the adsorption-dissociation energy barriers. [ 17 ] [ 20 ] Lopez et al. [ 18 ] calculated the adsorption energy of CO and O 2 on the Au( 111 ) terrace on which the Au-atoms have a coordination number of 9 as well as on an Au 10 cluster where the most reactive sites have a coordination of 4. They observed that the bond strengths are in general increased by as much as 1 eV, indicating a significant activation towards CO oxidation if one assumes that the activation barriers of surface reactions scale linearly with the adsorption energies ( Brønsted-Evans-Polanyi principle ). The observation that hemispherical two-layer gold clusters with a diameter of a few nanometers are most active for CO oxidation is well in line with the assumption that edge and corner atoms serve as the active sites, since for clusters of this shape and size the ratio of edge atoms to the total number of atoms is indeed maximized. [ 9 ]
The preferential activation of O 2 at the perimeter sites is an example of a support effect that promotes the catalytic activity of gold nanoparticles. Besides enabling a proper dispersion of the deposited particles and hence a high surface-to-volume ratio, the metal oxide support also directly perturbs the electronic structure of the deposited gold clusters via various mechanisms, including strain induction and charge transfer. For gold deposited on magnesia (MgO), a charge transfer from singly charged oxygen vacancies (F-centers) at the MgO surface to the Au cluster has been observed. [ 8 ] This charge transfer induces a local perturbation in the electronic structure of the gold clusters at the perimeter sites, enabling the formation of resonance states as the antibonding 2π* orbital of oxygen interacts with the metal d-band. As the antibonding orbital is occupied, the O-O bond is significantly weakened and stretched, i.e. activated. In gas-phase model studies, the formation of activated super-oxo species O 2 − is found to correlate with the size-dependent electronic properties of the clusters. [ 21 ] [ 22 ] The activation of O 2 at the perimeter sites is also observed for defect-free surfaces and neutral gold clusters, but to a significantly smaller extent. The activity-enhancing effect of charge transfer from the substrate to gold has also been reported by Chen and Goodman [ 7 ] in the case of a gold bilayer supported on ultrathin TiO 2 on Mo ( 112 ). In addition to charge transfer between the substrate and the gold nanoparticles, the support material has been observed to increase the catalytic activity of gold by inducing strain as a consequence of lattice mismatch. [ 9 ] The induced strains especially affect the Au atoms close to the substrate-cluster interface, resulting in a shift of the local d-band center towards energies closer to the Fermi level. This corroborates the periphery hypothesis and the creation of catalytically active bifunctional sites at the cluster-support interface. [ 3 ] Furthermore, the support-cluster interaction directly influences the size and shape of the deposited gold nanoparticles. In the case of weak interaction, less active 3D clusters are formed, whereas if the interaction is stronger, more active 2D few-layer structures are formed. This illustrates the ability to fine-tune the catalytic activity of gold clusters by varying the support material as well as the underlying metal upon which the substrate has been grown. [ 8 ] [ 19 ]
Finally, it has been observed that the catalytic activity of supported gold clusters towards CO oxidation is further enhanced by the presence of water. [ 2 ] Invoking the periphery hypothesis, water promotes the activation of O 2 by co-adsorption onto the perimeter sites where it reacts with O 2 to form adsorbed hydroxyl (OH*) and hydroperoxo (OOH*) species. The reaction of these intermediates with adsorbed CO is very rapid, and results in the efficient formation of CO 2 with concomitant recovery of the water molecule. [ 8 ] | https://en.wikipedia.org/wiki/Heterogeneous_gold_catalysis |
Heterogeneous metal catalyzed cross-coupling is a subset of metal catalyzed cross-coupling in which a heterogeneous metal catalyst is employed. Generally heterogeneous cross-coupling catalysts consist of a metal dispersed on an inorganic surface or bound to a polymeric support with ligands . Heterogeneous catalysts provide potential benefits over homogeneous catalysts in chemical processes in which cross-coupling is commonly employed—particularly in the fine chemical industry—including recyclability and lower metal contamination of reaction products. [ 1 ] However, for cross-coupling reactions, heterogeneous metal catalysts can suffer from pitfalls such as poor turnover and poor substrate scope, which have limited their utility in cross-coupling reactions to date relative to homogeneous catalysts. [ 2 ] Heterogeneous metal catalyzed cross-couplings, as with homogeneous metal catalyzed ones, most commonly use Pd as the cross-coupling metal.
Pd-catalyzed cross-coupling reactions catalyzed by a heterogeneous catalyst are thought to generally proceed, not on the surface of the solid catalyst, but in the solution phase. [ 3 ] The solution-phase intermediates are not necessarily distinguishable from those obtained during homogeneous cross-couplings – for example, a heterogeneous Pd-catalyzed Suzuki reaction still proceeds via oxidative addition of the electrophile by Pd(0), transmetallation of a boronate, and reductive elimination to give product and regenerate Pd(0) (Figure 1A). The activity of heterogeneous catalysts in cross-coupling seems to be tied to the ability of the electrophile (usually an aryl halide) to undergo oxidative addition with an atom of Pd(0), whether on the solid catalyst surface or already in solution, after which the rest of the catalytic cycle will take place – in solution.
The role of the solid phase in heterogeneous metal catalyzed cross-coupling, then, is more subtle than one might expect. Rather than enabling the productive catalytic cycle, the solid phase acts as a reservoir of Pd that is accessible to the productive catalytic cycle. For heterogeneous catalytic cross-coupling which involves unligated Pd (for example, when Pd/C is used as the catalyst), there exists a significant equilibrium that partitions Pd(0) between atomic, solution-phase monomers, surface-bound Pd, colloidal Pd and higher order Pd aggregates (Figure 1B). Aggregation of Pd atoms into clusters ultimately leads to irreversible precipitation of insoluble metallic Pd, which limits the maximum turnover number that can be achieved. An effective heterogeneous cross-coupling catalyst will recapture monomeric Pd or lower order oligomers and colloids onto the solid phase in order to maintain low concentrations of these species in solution, disfavouring aggregation and favouring instead the productive elementary steps of cross-coupling. [ 4 ] This may explain the (perhaps counterintuitive) observation that lower catalyst loadings can improve turnover number for a heterogeneous cross-coupling catalyst system (Pd on porous glass, in the Heck reactions of 4-bromoacetophenone at 180 °C). [ 5 ]
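A deliberately simplified toy model, not taken from the cited studies, captures why this can happen: treat dissolved Pd as fed by leaching from the support, removed by first-order re-deposition and by second-order irreversible aggregation. Because the aggregation sink is second order in dissolved Pd, a smaller fraction of the metal is wasted at lower loading. All rate constants and loadings below are invented for illustration.

```python
def fraction_lost_to_aggregates(loading, k_leach=1.0, k_redep=5.0, k_agg=50.0,
                                dt=1e-3, t_end=5.0):
    """Euler integration of a toy leach / re-deposit / aggregate scheme for Pd."""
    supported, dissolved, aggregated = loading, 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        leach = k_leach * supported * dt          # support -> solution
        redeposit = k_redep * dissolved * dt      # solution -> support (1st order)
        aggregate = k_agg * dissolved ** 2 * dt   # irreversible loss (2nd order)
        supported += redeposit - leach
        dissolved += leach - redeposit - aggregate
        aggregated += aggregate
    return aggregated / loading


if __name__ == "__main__":
    # All rate constants and loadings are invented, illustrative values.
    for loading in (1.0, 0.1, 0.01):
        frac = fraction_lost_to_aggregates(loading)
        print(f"relative loading = {loading:5.2f}   fraction aggregated = {frac:.2f}")
    # The second-order aggregation sink wastes a larger fraction of the metal at
    # higher loading, consistent with better turnover at lower loadings.
```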
The solid-phase to solution-phase mass transfer requirement for Pd in most heterogeneous cross-couplings has further implications. Because the supported ligand for a polymer-supported catalyst is not optimized for reactivity, and because the productive catalytic cycle usually ignores the supported ligand entirely even if present, “difficult” cross-coupling reactions which require fine tuning of the electronic and steric properties of the Pd catalyst – via expensive, designer ligands – are scarcely reported in a heterogeneous context. A 2021 survey of heterogeneous metal catalyzed cross-couplings in the fine chemical industry reported, out of 22 examples, 19 Suzuki or Heck reactions, which included only 2 examples with N-basic heterocycles, and only 4 examples with a singly-ortho-substituted electrophile (representative example in Scheme 1). [ 1 ] In nearly all these cases, reactions were initially developed with a homogeneous Pd catalyst (typically Pd(OAc)2 with either no exogenous ligand or PPh3 as ligand) on smaller scale, and only evaluated with heterogeneous Pd catalysts, (typically Pd/C or Pd black) for scaleup to decagram to multi-hundred-kilo scales, once process considerations such as process mass intensity and separation costs became significant. Notably, no polymer-supported catalysts were used; for these real-world examples of heterogeneous catalytic cross-coupling on scale, inorganic heterogeneous catalysts (such as Pd/C) are far cheaper and more robust than polymer-supported ligated Pd catalysts, and thus more commonly employed.
When designing a polymer-ligand solid support for Pd, the ligands should not simply be immobilized variants of homogeneous ligands which effect catalysis in the presence of Pd. Rather, immobilized ligands should optimize the redeposition of Pd onto the solid phase at the end of each catalytic cycle in a catalytically active form that is ready for a subsequent catalytic cycle. [ 6 ] Ligand sets which are rarely seen in homogeneous cross-coupling, then, appear in heterogeneous ligand-containing Pd catalysts. For example, Buchmeiser et al. have reported high turnover N,N-bidentate ligands (Figure 2) which achieve turnover numbers (TONs) of >10 5 in the Heck reactions of iodobenzene, and TON ca. 10 3 in the amination of bromobenzene. [ 7 ] These TONs are competitive with even the best solution TONs, giving clear advantages for this system for separation of the product from catalyst post-reaction.
The “shuttling” kinetics of Pd mass transfer (from solid phase to solution phase and back to solid phase) have been verified by three-phase test experiments, [ 8 ] while the solution-phase catalytic activity which characterizes most heterogeneous cross-coupling has been verified by TEM, hot filtration, and poisoning experiments. [ 9 ] [ 10 ] However, truly heterogeneous cross-coupling systems may exist. Poyatos et al. immobilized a Pd pincer carbene complex (Figure 3) on MK-10 clay and observed that while high TON (ca. 10 3 ) and TOF was maintained relative to the soluble catalyst, no activity was found in the solution for the supported catalyst – a strong indicator of a fully heterogeneous catalytic mechanism. [ 11 ]
For batch cross-couplings which use immobilized Pd, the concentration of solution-phase Pd increases dramatically when the reaction commences (as Pd is transferred out of the solid phase), and has decreased dramatically by the time full conversion has been achieved (by readsorption or precipitation onto the solid support). [ 12 ] [ 13 ] Such a kinetic profile matches the processing requirements of a batch process – although some amount of metal remains in solution post-reaction, the supported Pd catalyst can usually be recycled several times, despite the limitations described above.
In contrast, continuous flow systems do not allow for effective metal redeposition on the solid support; the reaction stream will transport the Pd through the support due to continuous metal leaching/readsorption (Figure 4). Cumulative periods of operation inevitably result in significant metal leaching from the flow system, depleting the supported catalyst's activity and giving low recyclability, with – typically – no particular benefit for reactivity. [ 14 ]
In principle, the metal leaching inherent to continuous-flow cross-coupling can be avoided. Plucinski and coworkers developed a continuous Mizoroki-Heck and hydrogenation sequence consisting of two separate packed-bed reactors containing Pd/C. [ 15 ] Because the Pd/C-catalyzed hydrogenation proceeds via a heterogeneous mechanism, [ 16 ] metal leaching during the second hydrogenation step is minimal, and Pd leached from the first part of the reactor during the Heck coupling can be recaptured by the second packed bed during the hydrogenation. By cycling the direction of flow between forward and reverse, catalytic activity could be maintained over two consecutive experiments, although a greater number of cycles would be desirable in order to vindicate this strategy for increasing turnover in solid-supported flow catalysts for cross-coupling.
Heterogeneous catalysts are easily removed from a reaction mixture by filtration. Although some amount of metal catalyst typically remains in the product from leaching, these amounts tend to be lower than those remaining after workup of a homogeneous metal-catalyzed cross-coupling. [ 1 ]
A heterogeneous catalyst consisting of Pd supported by silica-coated Fe2O3/Fe3O4 nanoparticles allows the reaction to be heated by electrical induction, and also allows facile magnetic separation of catalyst and product post-reaction. [ 17 ] Copper ferrite has been reported as a heterocycle arylation catalyst and can be similarly separated from the reaction with a magnet. [ 18 ]
Heterogeneous cross-coupling catalysts typically lose some portion of activity to metal leaching between different runs as a result of the solution-phase catalytic cycle (see above), and hence can only be recycled a finite number of times. [ 19 ]
Multiple groups [ 19 ] [ 20 ] have pointed out that the need for recycling is obviated at extremely high turnover and low catalyst loading, since in these cases the catalyst cost is negligible relative to the cost of other reaction components. As a result, for most cross-coupling reactions, in which heterogeneous catalysts generally require higher loadings than equivalent homogeneous ones, the benefits of heterogeneous catalysts afforded by the greater ease of recycling may be outweighed by the disadvantages – higher catalyst loadings, and the additional process costs. Additionally, when catalyst loadings are lower than 10 ppm – the regulatory limit for several metals including Pd in pharmaceutical APIs – separation of the metal following the reaction does not even need to be performed. This nullifies another of the commonly perceived advantages of heterogeneous catalysts over their homogeneous counterparts. | https://en.wikipedia.org/wiki/Heterogeneous_metal_catalyzed_cross-coupling |
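The interplay between catalyst loading, turnover number, and residual metal in the product can be made concrete with a rough calculation. The following minimal Python sketch assumes a product molar mass of 250 g/mol and, as a worst case, that all of the charged Pd ends up in the product stream; the loadings are illustrative values, not figures from the survey cited above.

```python
# Illustrative estimate of residual Pd in a cross-coupling product.
# All numbers below are assumed for the sake of the example.

M_PD = 106.42        # molar mass of Pd, g/mol
M_PRODUCT = 250.0    # assumed molar mass of the coupling product, g/mol

def residual_pd_ppm(loading_mol_percent, yield_fraction=1.0):
    """Mass of Pd per mass of isolated product, in ppm (w/w),
    assuming every Pd atom charged ends up in the product stream."""
    mol_pd_per_mol_product = (loading_mol_percent / 100.0) / yield_fraction
    return mol_pd_per_mol_product * M_PD / M_PRODUCT * 1e6

for loading in (1.0, 0.1, 0.001):   # mol% Pd relative to substrate
    print(f"{loading} mol% Pd  ->  ~{residual_pd_ppm(loading):.1f} ppm Pd in product")
```

On these assumptions, a loading of about 0.001 mol% (which at full conversion corresponds to a turnover number of 10^5) leaves roughly 4 ppm of Pd in the product, below the 10 ppm regulatory limit discussed above, whereas a 1 mol% loading leaves several thousand ppm.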
In chemistry , a mixture is a material made up of two or more different chemical substances which can be separated by physical methods. It is an impure substance made up of two or more elements or compounds mechanically mixed together in any proportion. [ 1 ] A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions , suspensions or colloids . [ 2 ] [ 3 ]
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds , without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. [ 4 ] Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point , may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them). [ 5 ] [ 6 ] [ 7 ]
All mixtures can be characterized as being separable by mechanical means (e.g. purification , distillation , electrolysis , chromatography , heat , filtration , gravitational sorting, centrifugation ). [ 8 ] [ 9 ] Mixtures differ from chemical compounds in the following ways:
In the example of sand and water, neither of the two substances changes in any way when they are mixed. Although the sand is in the water, it still keeps the same properties that it had when it was outside the water.
The following table shows the main properties and examples for all possible phase combinations of the three "families" of mixtures :
Mixtures can be either homogeneous or heterogeneous : a mixture of uniform composition in which all components are in the same phase, such as salt in water, is called homogeneous, whereas a mixture of non-uniform composition whose components can be easily identified, such as sand in water, is called heterogeneous.
In addition, " uniform mixture " is another term for homogeneous mixture and " non-uniform mixture " is another term for heterogeneous mixture . These terms are derived from the idea that a homogeneous mixture has a uniform appearance , or only one phase , because the particles are evenly distributed. However, a heterogeneous mixture has constituent substances that are in different phases and easily distinguishable from one another. In addition, a heterogeneous mixture may have a uniform (e.g. a colloid) or non-uniform (e.g. a pencil) composition.
Several solid substances, such as salt and sugar , dissolve in water to form homogeneous mixtures or " solutions ", in which there are both a solute (dissolved substance) and a solvent (dissolving medium) present. Air is an example of a solution as well: a homogeneous mixture of gaseous nitrogen solvent, in which oxygen and smaller amounts of other gaseous solutes are dissolved. Mixtures are not limited in either their number of substances or the amounts of those substances, though in most solutions, the solute-to-solvent proportion can only reach a certain point before the mixture separates and becomes heterogeneous.
A homogeneous mixture is characterized by uniform dispersion of its constituent substances throughout; the substances exist in equal proportion everywhere within the mixture. Put differently, a homogeneous mixture will be the same no matter from where in the mixture it is sampled. For example, if a solid-liquid solution is divided into two halves of equal volume , the halves will contain equal amounts of both the liquid medium and the dissolved solid (solvent and solute).
A solution is equivalent to a "homogeneous mixture". In solutions, solutes will not settle out after any period of time and they cannot be removed by physical methods, such as a filter or centrifuge . [ 12 ] As a homogeneous mixture, a solution has one phase (solid, liquid, or gas), although the phase of the solute and solvent may initially have been different (e.g., salt water).
Gases exhibit by far the greatest space (and, consequently, the weakest intermolecular forces) between their atoms or molecules; since intermolecular interactions are minuscule in comparison to those in liquids and solids, dilute gases very easily form solutions with one another. Air is one such example: it can be more specifically described as a gaseous solution of oxygen and other gases dissolved in nitrogen (its major component).
Examples of heterogeneous mixtures are emulsions and foams . In most cases, the mixture consists of two main constituents. For an emulsion, these are immiscible fluids such as water and oil. For a foam, these are a solid and a fluid, or a liquid and a gas. On larger scales both constituents are present in any region of the mixture, and in a well-mixed mixture in the same or only slightly varying concentrations. On a microscopic scale, however, one of the constituents is absent in almost any sufficiently small region. (If such absence is common on macroscopic scales, the combination of the constituents is a dispersed medium , not a mixture.) One can distinguish different characteristics of heterogeneous mixtures by the presence or absence of continuum percolation of their constituents. For a foam, a distinction is made between reticulated foam in which one constituent forms a connected network through which the other can freely percolate, or a closed-cell foam in which one constituent is present as trapped in small cells whose walls are formed by the other constituents. A similar distinction is possible for emulsions. In many emulsions, one constituent is present in the form of isolated regions of typically a globular shape, dispersed throughout the other constituent. However, it is also possible each constituent forms a large, connected network. Such a mixture is then called bicontinuous . [ 13 ]
Making a distinction between homogeneous and heterogeneous mixtures is a matter of the scale of sampling. On a coarse enough scale, any mixture can be said to be homogeneous, if the entire article is allowed to count as a "sample" of it. On a fine enough scale, any mixture can be said to be heterogeneous, because a sample could be as small as a single molecule. In practical terms, if the property of interest of the mixture is the same regardless of which sample of it is taken for the examination used, the mixture is homogeneous.
Gy's sampling theory quantitatively defines the heterogeneity of a particle as: [ 14 ]
where h i {\displaystyle h_{i}} , c i {\displaystyle c_{i}} , c batch {\displaystyle c_{\text{batch}}} , m i {\displaystyle m_{i}} , and m aver {\displaystyle m_{\text{aver}}} are respectively: the heterogeneity of the i {\displaystyle i} th particle of the population, the mass concentration of the property of interest in the i {\displaystyle i} th particle of the population, the mass concentration of the property of interest in the population, the mass of the i {\displaystyle i} th particle in the population, and the average mass of a particle in the population.
During sampling of heterogeneous mixtures of particles, the variance of the sampling error is generally non-zero.
Pierre Gy derived, from the Poisson sampling model, the following formula for the variance of the sampling error in the mass concentration in a sample:
in which V is the variance of the sampling error, N is the number of particles in the population (before the sample was taken), q i is the probability of including the i th particle of the population in the sample (i.e. the first-order inclusion probability of the i th particle), m i is the mass of the i th particle of the population and a i is the mass concentration of the property of interest in the i th particle of the population.
The above equation for the variance of the sampling error is an approximation based on a linearization of the mass concentration in a sample.
In the theory of Gy, correct sampling is defined as a sampling scenario in which all particles have the same probability of being included in the sample. This implies that q i no longer depends on i , and can therefore be replaced by the symbol q . Gy's equation for the variance of the sampling error becomes:
where a batch is that concentration of the property of interest in the population from which the sample is to be drawn and M batch is the mass of the population from which the sample is to be drawn.
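A numerical experiment can illustrate why the sampling-error variance is non-zero for a heterogeneous population and vanishes for a perfectly homogeneous one. The following Python sketch performs "correct sampling" in the sense described above (every particle is included independently with the same first-order inclusion probability q), using made-up particle masses and concentrations; it is an illustration of the concept only, not an implementation of Gy's formulas.

```python
import random

# Monte Carlo sketch of correct sampling: each particle is included in the
# sample independently with the same inclusion probability q.
# Particle masses and concentrations below are invented for illustration.

def sample_concentration(masses, concs, q, rng):
    """Mass concentration of the property of interest in one random sample."""
    m_tot, a_tot = 0.0, 0.0
    for m, a in zip(masses, concs):
        if rng.random() < q:          # first-order inclusion probability q
            m_tot += m
            a_tot += m * a
    return a_tot / m_tot if m_tot > 0 else float("nan")

rng = random.Random(0)
N = 2_000
masses = [rng.uniform(0.5, 1.5) for _ in range(N)]

# heterogeneous: half the particles carry the property, half do not
concs_hetero = [1.0 if i % 2 == 0 else 0.0 for i in range(N)]
# homogeneous: every particle has the batch concentration
concs_homo = [0.5] * N

for label, concs in (("heterogeneous", concs_hetero), ("homogeneous", concs_homo)):
    draws = [sample_concentration(masses, concs, q=0.05, rng=rng) for _ in range(1_000)]
    mean = sum(draws) / len(draws)
    var = sum((x - mean) ** 2 for x in draws) / len(draws)
    print(f"{label:14s}  mean ~ {mean:.3f}   sampling variance ~ {var:.2e}")
```

For the homogeneous population every sample has exactly the batch concentration, so the empirical sampling variance is zero; for the heterogeneous population the sample concentration fluctuates from draw to draw.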
Air pollution research [ 15 ] [ 16 ] shows that biological and health effects after exposure to mixtures are more potent than effects from exposure to the individual components. [ 17 ] | https://en.wikipedia.org/wiki/Heterogeneous_mixture
In dynamics , probability , physics , chemistry and related fields, a heterogeneous random walk in one dimension is a random walk in a one dimensional interval with jumping rules that depend on the location of the random walker in the interval.
For example: say that the time is discrete and so is the interval. Namely, the random walker jumps every time step either left or right. A possible heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. Specifically, say that the interval has 9 sites (labeled 1 through 9), and the sites (also termed states) are connected with each other linearly (where the edge sites are connected to their adjacent sites and to each other). In each time step, the jump probabilities (from the current site) are determined by flipping a coin; for heads we set: probability of jumping left = 1/3, while for tails we set: probability of jumping left = 0.55. Then, a random number is drawn from a uniform distribution : when the random number is smaller than the probability of jumping left, the jump is to the left; otherwise, the jump is to the right. Usually, in such a system, we are interested in the probability of staying in each of the various sites after t jumps, and in the limit of this probability when t is very large, t → ∞ {\displaystyle t\rightarrow \infty } .
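A minimal Monte Carlo sketch of this nine-site example is given below in Python. The coin-flip rule for the local jump probabilities is the one described above; the boundary handling (closing the chain into a ring so that sites 1 and 9 are neighbours) and the choice of starting site are assumptions made purely for illustration.

```python
import random
from collections import Counter

# Sketch of the 9-site heterogeneous random walk described above.
# Each time step: a fair coin fixes the local jump probability
# (heads: P(left) = 1/3, tails: P(left) = 0.55), then a uniform
# random number decides the actual jump direction.
# Assumptions: walkers start at site 5, and the chain is closed into
# a ring, so sites 1 and 9 are treated as neighbours.

def walk_occupation(t_steps, n_walkers=20_000, n_sites=9, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_walkers):
        site = 5
        for _ in range(t_steps):
            p_left = 1/3 if rng.random() < 0.5 else 0.55   # coin flip
            if rng.random() < p_left:                      # uniform draw
                site = site - 1 if site > 1 else n_sites
            else:
                site = site + 1 if site < n_sites else 1
        counts[site] += 1
    return {s: counts[s] / n_walkers for s in range(1, n_sites + 1)}

# Estimate of the occupation probability of each site after 50 jumps.
print(walk_occupation(t_steps=50))
```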
Generally, the time in such processes can also vary in a continuous way, and the interval is also either discrete or continuous. Moreover, the interval is either finite or without bounds. In a discrete system , the connections are among adjacent states. The basic dynamics are either Markovian , semi-Markovian , or even not Markovian depending on the model. In discrete systems, heterogeneous random walks in 1d have jump probabilities that depend on the location in the system, and/or different jumping time (JT) probability density functions (PDFs) that depend on the location in the system. [ citation needed ] General solutions for heterogeneous random walks in 1d obey equations ( 1 )-( 5 ), presented in what follows.
Random walks [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] can be used to describe processes in biology, [ 12 ] [ failed verification ] chemistry, [ 13 ] and physics, [ 14 ] [ 15 ] including chemical kinetics [ 13 ] and polymer dynamics. [ 14 ] [ 15 ] Random walks also appear when studying individual molecules, [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] individual channels, [ 26 ] [ 27 ] individual biomolecules, [ 28 ] individual enzymes, [ 18 ] [ 20 ] [ 21 ] [ 22 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] and quantum dots . [ 34 ] [ 35 ] [ 36 ] Importantly, PDFs and special correlation functions [ clarification needed ] can be easily calculated from single-molecule measurements but not from ensemble measurements. This unique information can be used for discriminating between distinct random walk models that share some properties [ which? ] , [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] and this demands a detailed theoretical analysis of random walk models. In this context, utilizing the information content in single-molecule data is a matter of ongoing research. [ weasel words ]
The actual random walk obeys a stochastic equation of motion , but its probability density function (PDF) obeys a deterministic equation. PDFs of random walks can be formulated in terms of the (discrete in space) master equation [ 1 ] [ 12 ] [ 13 ] and the generalized master equation [ 3 ] or the (continuous in space and time) Fokker Planck equation [ 37 ] and its generalizations. [ 10 ] Continuous time random walks, [ 1 ] renewal theory , [ 38 ] and the path representation [ 3 ] [ 6 ] [ 8 ] [ 9 ] are also useful formulations of random walks. The network of relationships between the various descriptions provides a powerful tool in the analysis of random walks. Arbitrarily heterogeneous environments make the analysis difficult, especially in high dimensions. [ weasel words ]
Known important results in simple systems include:
The solution for the Green's function G i j ( t ; L ) {\displaystyle G_{ij}(t;L)} for a semi-Markovian random walk in an arbitrarily heterogeneous environment in 1D was recently given using the path representation. [ 6 ] [ 8 ] [ 9 ] (The function G i j ( t ; L ) {\displaystyle G_{ij}(t;L)} is the PDF for occupying state i at time t given that the process started at state j exactly at time 0.) A semi-Markovian random walk in 1D is defined as follows: a random walk whose dynamics are described by the (possibly) state- and direction-dependent JT-PDFs, ψ i j ( t ) {\displaystyle \psi _{ij}(t)} , for transitions between states i and i ± 1, that generates stochastic trajectories of uncorrelated waiting times that are not exponentially distributed. ψ i j ( t ) {\displaystyle \psi _{ij}(t)} obeys the normalization conditions (see fig. 1)
The dynamics can also include state- and direction-dependent irreversible trapping JT-PDFs, ψ i I ( t ) {\displaystyle \psi _{iI}(t)} , with I=i+L . The environment is heterogeneous when ψ i j ( t ) {\displaystyle \psi _{ij}(t)} depends on i . The above process is also a continuous time random walk and has an equivalent generalized master equation representation for the Green's function. G i j ( t ) {\displaystyle G_{ij}(t)} . [ 3 ] [ 6 ] [ 8 ] [ 9 ]
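The definition above can be illustrated with a direct Monte Carlo estimate of the Green's function G_ij(t) for a short chain. In the sketch below the jumping-time PDFs are gamma distributions (hence non-exponential, which is what makes the walk semi-Markovian) with made-up, state-dependent parameters; for simplicity the waiting time depends only on the current state rather than also on the jump direction, and the boundary sites simply jump inward. All of these choices are assumptions for illustration, not part of the published solution.

```python
import random

# Monte Carlo sketch of a semi-Markovian random walk on a short 1D chain,
# estimating G_ij(t): the probability of occupying state i at time t,
# given a start in state j at time 0.  The jump probabilities and the
# gamma-distributed waiting times below are invented for illustration.

L = 5                     # number of states, labelled 1..L

def draw_jump(state, rng):
    """Return (next_state, waiting_time) for one transition from `state`."""
    p_left = 0.4 + 0.02 * state          # state-dependent (heterogeneous) bias
    if state == 1:
        nxt = 2                          # assumed boundary rule: jump inward
    elif state == L:
        nxt = L - 1
    else:
        nxt = state - 1 if rng.random() < p_left else state + 1
    shape = 1.0 + 0.5 * state            # state-dependent JT-PDF parameter
    return nxt, rng.gammavariate(shape, 1.0 / shape)   # mean waiting time ~ 1

def green_function(i, j, t, n_walkers=50_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walkers):
        state, clock = j, 0.0
        while True:
            nxt, dt = draw_jump(state, rng)
            if clock + dt > t:           # walker is still in `state` at time t
                break
            state, clock = nxt, clock + dt
        hits += (state == i)
    return hits / n_walkers

print(green_function(i=3, j=1, t=4.0))   # estimate of G_31(4)
```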
In a completely heterogeneous semi-Markovian random walk in a discrete system of L (> 1) states, the Green's function was found in Laplace space (the Laplace transform of a function is defined with, f ¯ ( s ) = ∫ 0 ∞ e − s t f ( t ) d t {\displaystyle {\bar {f}}(s)=\int _{0}^{\infty }e^{-st}f(t)\,dt} ). Here, the system is defined through the jumping time (JT) PDFs: ψ i j ( t ) {\displaystyle \psi _{ij}(t)} connecting state i with state j (the jump is from state i ). The solution is based on the path representation of the Green's function, calculated when including all the path probability density functions of all lengths:
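Since the solution is stated in Laplace space, it can help to recall the transform with a quick numerical check. The Python sketch below (a crude trapezoidal quadrature applied to an arbitrary test function) verifies that the Laplace transform of e^(−at) is 1/(s + a); it illustrates only the transform definition quoted above, not the Green's-function formulas themselves.

```python
import math

# Numerical check of the Laplace transform definition used above:
# for f(t) = exp(-a*t) the transform is 1/(s + a).

def laplace_numeric(f, s, t_max=60.0, n=100_000):
    """Crude trapezoidal approximation of the Laplace integral."""
    dt = t_max / n
    total = 0.5 * (f(0.0) + math.exp(-s * t_max) * f(t_max))
    for k in range(1, n):
        t = k * dt
        total += math.exp(-s * t) * f(t)
    return total * dt

a, s = 2.0, 1.5
print(laplace_numeric(lambda t: math.exp(-a * t), s))   # ~ 0.2857
print(1.0 / (s + a))                                    # exact value 1/(s+a)
```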
Here,
and
Also, in Eq. ( 1 ),
and
with
and
For L = 1, Φ ¯ ( s ; L ) = 1 {\displaystyle {\bar {\Phi }}(s;L)=1} . In this paper, the symbol [ L /2], as appearing in the upper bound of the sum in eq. ( 5 ) is the floor operation (round towards zero). Finally, the factor Φ ( s , L ~ ) {\displaystyle \Phi (s,{\tilde {L}})} in eq. ( 1 ) has the same form as in Φ ¯ ( s ; L ) {\displaystyle {\bar {\Phi }}(s;L)} in eqs. ( 3 )-( 5 ), yet it is calculated on a lattice L ~ {\displaystyle {\tilde {L}}} . Lattice L ~ {\displaystyle {\tilde {L}}} is constructed from the original lattice by taking out from it the states i and j and the states between them, and then connecting the obtained two fragments. For cases in which a fragment is a single state, this fragment is excluded; namely, lattice L ~ {\displaystyle {\tilde {L}}} is the longer fragment. When each fragment is a single state, Φ ¯ ( s ; L ~ ) = 1 {\displaystyle {\bar {\Phi }}(s;{\tilde {L}})=1} .
Equations ( 1 )-( 5 ) hold for any 1D semi-Markovian random walk in a L-state chain, and form the most general solution in an explicit form for random walks in 1d.
Clearly, G ¯ i j ( s ; L ) {\displaystyle {\bar {G}}_{ij}(s;L)} in Eqs. ( 1 )-( 5 ) solves the corresponding continuous time random walk problem and the equivalent generalized master equation. Equations ( 1 )-( 5 ) enable analyzing semi-Markovian random walks in 1D chains from a wide variety of aspects. Inversion to the time domain gives the Green's function, but moments and correlation functions can also be calculated from Eqs. ( 1 )-( 5 ), and then inverted into the time domain (for the relevant quantities). The closed-form G ¯ i j ( s ; L ) {\displaystyle {\bar {G}}_{ij}(s;L)} also manifests its utility when numerical inversion of the generalized master equation is unstable. Moreover, using G ¯ i j ( s ; L ) {\displaystyle {\bar {G}}_{ij}(s;L)} in simple analytical manipulations gives [ 6 ] [ 8 ] [ 9 ] (i) the first passage time PDF, (ii)–(iii) the Green's functions for a random walk with a special WT-PDF for the first event and for a random walk in a circular L-state 1D chain, and (iv) joint PDFs in space and time with many arguments.
Still, the formalism used in this article is the path representation of the Green's function G i j ( t ) {\displaystyle G_{ij}(t)} , and this supplies further information on the process. The path representation follows:
The expression for W i j ( t ; L ) {\displaystyle W_{ij}(t;L)} in Eq. ( 6 ) follows,
W i j ( t ; L ) {\displaystyle W_{ij}(t;L)} is the PDF of reaching state i exactly at time t when starting at state j exactly at time 0. This is the path PDF in time that is built from all paths with 2 n + γ i j {\displaystyle 2n+\gamma _{ij}} transitions that connect states j with i . Two different path types contribute to w i j ( τ , 2 n + γ i j ; L ) {\displaystyle w_{ij}(\tau ,2n+\gamma _{ij};L)} : [ 8 ] [ 9 ] paths made of the same states appearing in different orders and different paths of the same length of 2 n + γ i j {\displaystyle 2n+\gamma _{ij}} transitions. Path PDFs for translation invariant chains are mono-peaked. Path PDF for translation invariant chains mostly contribute to the Green's function in the vicinity of its peak, but this behavior is believed to characterize heterogeneous chains as well.
We also note that the following relation holds, W ¯ i j ( s ; L ) = W ¯ 1 L ( s ; L ) / W ¯ 1 L ~ ( s ; L ~ ) {\displaystyle {\bar {W}}_{ij}(s;L)={\bar {W}}_{1L}(s;L)/{\bar {W}}_{1{\tilde {L}}}(s;{\tilde {L}})} . Using this relation, we focus in what follows on solving w ¯ 1 L ( s ; L ) {\displaystyle {\bar {w}}_{1L}(s;L)} .
Complementary information on the random walk, beyond that supplied by the Green's function, is contained in the path PDFs. This is evident when constructing approximations for Green's functions, in which path PDFs are the building blocks of the analysis. [ 8 ] [ 9 ] Also, analytical properties of the Green's function are clarified only in a path PDF analysis. Presented here is the recursion relation for w i j ( τ , 2 n + γ i j ; L ) {\displaystyle w_{ij}(\tau ,2n+\gamma _{ij};L)} in the length n of path PDFs for any fixed value of L . The recursion relation is linear in the path PDFs, with the h ¯ ( s , i ; L ) {\displaystyle {\bar {h}}(s,i;L)} s in Eq. ( 5 ) serving as the n -independent coefficients, and is of order [ L / 2]:
The recursion relation is used for explaining the universal formula for the coefficients in Eq. ( 1 ).
The solution of the recursion relation is obtained by applying a z transform:
Setting z = 1 {\displaystyle z=1} in Eq. ( 9 ) gives W ¯ 1 L ( s ; L ) {\displaystyle {\bar {W}}_{1L}(s;L)} . The Taylor expansion of Eq. ( 9 ) gives w ¯ 1 L ( s , 2 n + γ 1 L ; L ) {\displaystyle {\bar {w}}_{1L}(s,2n+\gamma _{1L};L)} . The result follows:
In Eq. ( 10 ) c ¯ k 0 ( s ; L ) {\displaystyle {\bar {c}}_{k_{0}}(s;L)} is one for L = 2 , 3 {\displaystyle L=2,3} , and otherwise,
where
The initial number a i , n s {\displaystyle a_{i,n}s} follow:
and, | https://en.wikipedia.org/wiki/Heterogeneous_random_walk_in_one_dimension |
Heterogeneous nuclear ribonucleoproteins ( hnRNPs ) are complexes of RNA and protein present in the cell nucleus during gene transcription and subsequent post-transcriptional modification of the newly synthesized RNA (pre-mRNA). The presence of the proteins bound to a pre-mRNA molecule serves as a signal that the pre-mRNA is not yet fully processed and therefore not ready for export to the cytoplasm . [ 1 ] Since most mature RNA is exported from the nucleus relatively quickly, most RNA-binding proteins in the nucleus exist as heterogeneous ribonucleoprotein particles. After splicing has occurred, the proteins remain bound to spliced introns and target them for degradation.
hnRNPs are also integral to the 40S subunit of the ribosome and therefore important for the translation of mRNA in the cytoplasm. [ 2 ] However, hnRNPs also have their own nuclear localization sequences (NLS) and are therefore found mainly in the nucleus. Though it is known that a few hnRNPs shuttle between the cytoplasm and nucleus, immunofluorescence microscopy with hnRNP-specific antibodies shows nucleoplasmic localization of these proteins with little staining in the nucleolus or cytoplasm. [ 3 ] This is likely because of their major role in binding to newly transcribed RNAs. High-resolution immunoelectron microscopy has shown that hnRNPs localize predominantly to the border regions of chromatin , where they have access to these nascent RNAs. [ 4 ]
The proteins involved in the hnRNP complexes are collectively known as heterogeneous ribonucleoproteins. They include protein K and polypyrimidine tract-binding protein (PTB), which is regulated by phosphorylation catalyzed by protein kinase A and is responsible for suppressing RNA splicing at a particular exon by blocking access of the spliceosome to the polypyrimidine tract . [ 5 ] : 326 hnRNPs are also responsible for strengthening and inhibiting splice sites by making such sites more or less accessible to the spliceosome. [ 6 ] Cooperative interactions between attached hnRNPs may encourage certain splicing combinations while inhibiting others. [ 7 ]
hnRNPs affect several aspects of the cell cycle by recruiting, splicing , and co-regulating certain cell cycle control proteins. Much of hnRNPs' importance to cell cycle control is evidenced by their role as oncogenes, in which a loss of their functions results in various common cancers. Often, misregulation by hnRNPs is due to splicing errors, but some hnRNPs are also responsible for recruiting and guiding the proteins themselves, rather than just addressing nascent RNAs.
hnRNP C is a key regulator of the BRCA1 and BRCA2 genes. In response to ionizing radiation, hnRNP C partially localizes to the site of DNA damage, and when depleted, S-phase progression of the cell is impaired. [ 8 ] Additionally, BRCA1 and BRCA2 levels fall when hnRNP C is lost. BRCA1 and BRCA2 are crucial tumor-suppressor genes which are strongly implicated in breast cancers when mutated. BRCA1 in particular causes G2/M cell cycle arrest in response to DNA damage via the CHEK1 signaling cascade. [ 9 ] hnRNP C is important for the proper expression of other tumor suppressor genes including RAD51 and BRIP1 as well. Through these genes, hnRNP is necessary to induce cell-cycle arrest in response to DNA damage by ionizing radiation . [ 7 ]
HER2 is overexpressed in 20-30% of breast cancers and is commonly associated with poor prognosis. It is therefore an oncogene whose differently spliced variants have been shown to have different functions. Knocking down hnRNP H1 was shown to increase the amount of an oncogenic variant Δ16HER2. [ 10 ] HER2 is an upstream regulator of cyclin D1 and p27, and its overexpression leads to the deregulation of the G1/S checkpoint. [ 11 ]
hnRNPs also play a role in DNA damage response in coordination with p53 . hnRNP K is rapidly induced after DNA damage by ionizing radiation. It cooperates with p53 to induce the activation of p53 target genes, thus activating cell-cycle checkpoints. [ citation needed ] p53 itself is an important tumor-suppressor gene sometimes known by the epithet “the guardian of the genome.” hnRNP K’s close association with p53 demonstrates its importance in DNA damage control.
p53 regulates a large group of RNAs that are not translated into protein, called large intergenic noncoding RNAs ( lincRNAs ). p53 suppression of genes is often carried out by a number of these lincRNAs, which in turn have been shown to act though hnRNP K. Through physical interactions with these molecules, hnRNP K is targeted to genes and transmits p53 regulation, thus acting as a key repressor within the p53-dependent transcriptional pathway. [ 12 ] [ 13 ]
hnRNP serves a variety of processes in the cell, some of which include:
The association of a pre-mRNA molecule with a hnRNP particle prevents formation of short secondary structures dependent on base pairing of complementary regions, thereby making the pre-mRNA accessible for interactions with other proteins.
hnRNP has been shown to regulate CD44 , a cell-surface glycoprotein , through splicing mechanisms. CD44 is involved in cell-cell interactions and has roles in cell adhesion and migration. Splicing of CD44 and the functions of the resulting isoforms are different in breast cancer cells, and when knocked down, hnRNP reduced both cell viability and invasiveness. [ 14 ]
Several hnRNPs interact with telomeres , which protect the ends of chromosomes from deterioration and are often associated with cell longevity. hnRNP D associates with the G-rich repeat region of the telomeres, possibly stabilizing the region from secondary structures which would inhibit telomere replication. [ 15 ]
hnRNP has also been shown to interact with telomerase , the protein responsible for elongating telomeres and preventing their degradation. hnRNPs C1 and C2 associate with the RNA component of telomerase, which improves its ability to access the telomere. [ 16 ] [ 17 ] [ 18 ]
Human genes encoding heterogeneous nuclear ribonucleoproteins include: | https://en.wikipedia.org/wiki/Heterogeneous_ribonucleoprotein_particle |
Water oxidation is one of the half reactions of water splitting :
2H2O → O2 + 4H+ + 4e− Oxidation (generation of dioxygen)
4H+ + 4e− → 2H2 Reduction (generation of dihydrogen)
2H2O → 2H2 + O2 Total Reaction
Of the two half reactions, the oxidation step is the more demanding because it requires the coupling of four electron and proton transfers and the formation of an oxygen-oxygen bond. This process occurs naturally in the photosystem II of plants to provide protons and electrons for photosynthesis and release oxygen to the atmosphere, [ 1 ] as well as in some electrowinning processes. [ 2 ] Since hydrogen can be used as an alternative clean-burning fuel, there has been a need to split water efficiently. However, there are known materials that can mediate the reduction step efficiently; therefore, much of the current research is aimed at the oxidation half reaction, also known as the Oxygen Evolution Reaction (OER). Current research focuses on understanding the mechanism of OER and on the development of new materials that catalyze the process. [ 3 ]
Both the oxidation and reduction steps are pH dependent. Figure 1 shows the standard potentials at pH 0 (strongly acidic) as referenced to the normal hydrogen electrode (NHE).
2 half reactions (at pH = 0): Oxidation: 2H2O → 4H+ + 4e− + O2, E° = +1.23 V vs. NHE
Reduction: 4H+ + 4e− → 2H2, E° = 0.00 V vs. NHE
Overall: 2H2O → 2H2 + O2, E°cell = +1.23 V; ΔG = 475 kJ/mol
Water splitting can be done at higher pH values as well; however, the standard potentials will vary according to the Nernst equation and therefore shift by −59 mV for each pH unit increase. However, the total cell potential (the difference between the oxidation and reduction half-cell potentials) will remain 1.23 V. This potential can be related to the Gibbs free energy (ΔG) by:
ΔG°cell = −nFE°cell
where n is the number of electrons per mole of product and F is the Faraday constant . Therefore, it takes 475 kJ of energy to make one mole of O2 as calculated by thermodynamics. However, in reality no process can be this efficient. Systems always suffer from an overpotential that arises from activation barriers, concentration effects and voltage drops due to resistance. The activation barriers, or activation energy, are associated with the high-energy transition states that are reached during the electrochemical process of OER. The lowering of these barriers would allow OER to occur at lower overpotentials and faster rates.
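The figures quoted above can be reproduced directly from the relation ΔG°cell = −nFE°cell and the Nernst equation. The short Python sketch below does the arithmetic; the constants are standard values, and the result printed is the magnitude of the free-energy change, i.e. the energy required per mole of O2.

```python
# Worked numbers for the water-splitting thermodynamics quoted above.
F = 96485.0        # Faraday constant, C/mol
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # temperature, K

n = 4              # electrons transferred per mole of O2
E_cell = 1.23      # standard cell potential, V

energy_required = n * F * E_cell / 1000.0          # kJ per mole of O2
print(f"Energy required = {energy_required:.0f} kJ/mol O2")   # ~475 kJ/mol

# Nernst shift of each half-reaction potential with pH at 298 K:
shift_per_pH = 2.303 * R * T / F * 1000.0           # mV per pH unit
print(f"Potential shift = -{shift_per_pH:.0f} mV per pH unit")   # ~ -59 mV
```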
Heterogeneous OER is sensitive to the surface on which the reaction takes place and is also affected by the pH of the solution. The general mechanism for acidic and alkaline solutions is shown below. Under acidic conditions, water binds to the surface with the irreversible removal of one electron and one proton to form a platinum hydroxide. [ 4 ] In an alkaline solution, a reversible binding of hydroxide ion coupled to a one-electron oxidation is thought to precede a turnover-limiting electrochemical step involving the removal of one proton and one electron to form a surface oxide species. [ 5 ] The shift in mechanism between the pH extremes has been attributed to the kinetic facility of oxidizing hydroxide ion relative to water. Using the Tafel equation , one can obtain kinetic information about the electrode material, such as the exchange current density and the Tafel slope. [ 6 ] OER is presumed not to take place on clean metal surfaces such as platinum; instead, an oxide surface is formed prior to oxygen evolution. [ 7 ]
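As an illustration of how the Tafel equation is used in practice, the sketch below fits η = a + b·log10(j) to a small set of synthetic overpotential/current-density points and extracts the Tafel slope and the exchange current density j0 (the current density at which the extrapolated overpotential is zero). The data points are invented for the example; they happen to be chosen so that j0 comes out near 10^−9 A/cm^2, the order of magnitude quoted for platinum below.

```python
import math

# Minimal sketch of extracting Tafel parameters from overpotential/current
# data.  The data points are synthetic values invented for this example,
# not measurements on any particular electrode.
data = [            # (current density j in A/cm^2, overpotential eta in V)
    (1e-6, 0.36), (1e-5, 0.48), (1e-4, 0.60), (1e-3, 0.72),
]

# Least-squares fit of eta = a + b*log10(j)  (the Tafel equation)
xs = [math.log10(j) for j, _ in data]
ys = [eta for _, eta in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
a = y_mean - b * x_mean

j0 = 10 ** (-a / b)          # exchange current density: eta = 0 at j = j0
print(f"Tafel slope ~ {b * 1000:.0f} mV/decade, j0 ~ {j0:.1e} A/cm^2")
```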
OER has been studied on a variety of materials including:
Preparation of the surface and electrolysis conditions have a large effect on reactivity (defects, steps, kinks, low-coordinate sites); it is therefore difficult to predict an OER material's properties from its bulk structure. Surface effects have a large influence on the kinetics and thermodynamics of OER.
Platinum has been a widely studied material for OER because it is the most catalytically active element for this reaction. [ 13 ] It exhibits exchange current density values on the order of 10^−9 A/cm^2. Much of the mechanistic knowledge of OER was gathered from studies on platinum and its oxides. [ 5 ] It was observed that there was a lag in the evolution of oxygen during electrolysis; therefore, an oxide film must first form at the surface before OER begins. [ 5 ] The Tafel slope, which is related to the kinetics of the electrocatalytic reaction, was shown to be independent of the oxide layer thickness at low current densities but becomes dependent on oxide thickness at high current densities. [ 14 ]
Iridium oxide (IrO2) is the industry standard OER catalyst used in polymer electrolyte membrane electrolysis due to its high stability. [ 15 ] It was first proposed in the 1970s as an OER catalyst, and has been widely researched and implemented since then. [ 16 ]
Ruthenium oxide (RuO2) shows some of the best performance as an OER material in acidic environments. It has been studied since the early 1970s as a water oxidation catalyst with one of the lowest reported overpotentials for OER at the time. [ 17 ] It has since been investigated for OER on Ru(110) single-crystal oxide surfaces, [ 18 ] compact films, [ 19 ] and titanium-supported films. [ 20 ] RuO2 films can be prepared by thermal decomposition of ruthenium chloride on inert substrates. [ 19 ]
The spinel compounds are extremely useful in designing heterogeneous water oxidation catalysts. Generally these spinels are often coated over carbon materials and further reduced to create oxygen vacancies in their lattice to enhance their water oxidation capabilities. [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Heterogeneous_water_oxidation
Heterogonesis describes the segregation of parental genomes into distinct cell lineages in the dividing zygote . [ 1 ] [ 2 ]
Fertilisation occurs when an ovum fuses with a sperm , forming a zygote . Normally, the genomes of the two parents assort into two diploid bi-parental daughter cells. In a heterogoneic cell division, the genome of only one parent assorts into a single daughter cell following the formation of a tripolar (rather than the normal bipolar) spindle apparatus . [ 3 ] Heterogonesis allows for chromosomal segregation to occur in a dispermic fertilisation which may subsequently result in chimerism or sesquizygosis .
The term heterogonesis was coined in 2016 by Destouni and Vermeesch who observed the phenomenon in bovine zygotes. [ 1 ] The word is derived from the Greek meaning "different parental origin". | https://en.wikipedia.org/wiki/Heterogonesis |
In biology, a heterokaryon is a multinucleate cell that contains genetically different nuclei . This is a special type of syncytium . This can occur naturally, such as in the mycelium of fungi during sexual reproduction, or artificially as formed by the experimental fusion of two genetically different cells, as e.g., in hybridoma technology .
The term heterokaryosis , for the property of having genetically unlike nuclei, is borrowed from the German Heterokaryosis , which was coined by the German botanist Hans Burgeff in a 1912 paper about his work on the fungus Phycomyces nitens . [ 1 ] It is based on Greek hetero , meaning "different," and karyon , meaning "kernel" or, in this case, "nucleus." [ 2 ]
Heterokaryons are found in the life cycle of yeasts, for example Saccharomyces cerevisiae , a genetic model organism. The heterokaryon stage is produced from the fusion of two haploid cells. This transient heterokaryon can produce further haploid buds, or cell nuclei can fuse and produce a diploid cell, which can then undergo mitosis.
The term was first used for ciliate protozoans such as Tetrahymena . These organisms have two types of cell nuclei, a large, somatic macronucleus and a small, germline micronucleus . Both exist in a single cell at the same time and carry out different functions with distinct cytological and biochemical properties.
Many fungi (notably the arbuscular mycorrhizal fungi ) exhibit heterokaryosis. The haploid nuclei within a mycelium may differ from one another not merely by accumulating mutations , but by the non-sexual fusion of genetically distinct fungal hyphae , although a self / non-self recognition system exists in Fungi and usually prevents fusions with non-self. [ 3 ] [ 1 ]
Heterokaryosis is also common upon mating, as in Dikarya ( Ascomycota and Basidiomycota ). Mating requires the encounter of two haploid nuclei of compatible mating types. These nuclei do not immediately fuse, and remain haploid in an n+n state until the very onset of meiosis: this phenomenon is called delayed karyogamy. Heterokaryosis can lead to individuals that have different nuclei in different parts of their mycelium, although in ascomycetes, particularly in " Neurospora ", nuclei have been shown to flow and mix throughout the mycelium. [ 4 ] In heterokaryons, the notion of the individual itself becomes vague, since the rule of “one genome = one individual” does not apply anymore. [ 5 ] Genetic heterogeneity within an individual is indeed usually considered to be detrimental, as selfish variants may be selected for and disrupt the integrity of the individual level. [ 6 ]
Heterokaryosis is most common in fungi, but also occurs in slime molds . This happens because the nuclei in the 'plasmodium' form are the products of many pairwise fusions between amoeboid haploid individuals. When genetically divergent nuclei come together in the plasmodium form, cheaters have been shown to emerge. However, genetic homogeneity among the fusing amoebae serves to maintain the multicellular plasmodium. [ 7 ]
A medical example is a heterokaryon composed of nuclei from Hurler syndrome and Hunter syndrome . Both of these diseases result in problems in mucopolysaccharide metabolism. However, a heterokaryon of nuclei from both of these diseases exhibits normal mucopolysaccharide metabolism, proving that the two syndromes affect different proteins and so can correct each other in the heterokaryon. | https://en.wikipedia.org/wiki/Heterokaryon |
Heterologous expression refers to the expression of a gene or part of a gene in a host organism that does not naturally have the gene or gene fragment in question. Insertion of the gene in the heterologous host is performed by recombinant DNA technology . The purpose of heterologous expression is often to determine the effects of mutations and differential interactions on protein function. It provides an easy path to efficiently express and experiment with combinations of genes and mutants that do not naturally occur.
Depending on the duration of recombination in the host genome, two types of heterologous expression are available: long-term (stable) and short-term (transient). Long-term expression involves a potentially permanent integration into the genome, while short-term expression is a temporary modification that lasts for 1 to 3 days. [ 1 ]
After being inserted in the host, the gene may be integrated into the host DNA , causing permanent expression, or not integrated, causing transient expression . Heterologous expression can be done in many types of host organisms. The host organism can be a bacterium, yeast, mammalian cell, or plant cell. This host is called the " expression system ". Homologous expression , on the other hand, refers to the overexpression of a gene in a system from where it originates.
Gene identification can be accomplished using computer-based methods known as heterologous screening techniques. [ 2 ] A digital library of cDNA sequences has data from many sequencing projects and allows for easy access to sequence information for known genes.
If a genomic sequence is unknown or unavailable, the DNA undergoes a process of random fragmentation, cloning, and screening to determine its phenotype. [ 3 ] Although various methods can be used to obtain a particular gene, the easiest way to reveal the components of an unknown DNA sequence is by first identifying its restriction enzymes. Restriction enzymes are enzymes responsible for cleaving DNA into fragments at specific sites within the molecule known as restriction sites . These enzymes are found in bacteria and archaea and are known to protect their DNA from invasion by foreign viruses. Restriction enzymes are distinct, and each recognizes only a specific sequence of base pairs within DNA, many of which tend to be palindromic . By locating each enzyme, the sequence associated with the restriction enzyme can be identified and isolated.
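A short Python sketch can make the idea of palindromic recognition sites concrete. EcoRI and its GAATTC recognition sequence are a standard textbook example; the input DNA string below is invented for illustration.

```python
# Minimal sketch: locating restriction sites in a DNA sequence.
# EcoRI (GAATTC) is a well-known palindromic recognition sequence;
# the input sequence below is made up for illustration.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_palindromic(site):
    """A site is palindromic if it equals its own reverse complement."""
    return site == reverse_complement(site)

def find_sites(sequence, site):
    """Return 0-based positions at which the recognition site occurs."""
    return [i for i in range(len(sequence) - len(site) + 1)
            if sequence[i:i + len(site)] == site]

ecori = "GAATTC"
dna = "ATGGAATTCCGTTAGAATTCAAGCT"
print(is_palindromic(ecori))        # True
print(find_sites(dna, ecori))       # positions of the EcoRI sites
```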
If the sequence is known, a technique referred to as the polymerase chain reaction (PCR) can be used to isolate a gene of interest. The purpose of PCR is not only to identify but also to amplify a particular DNA segment through phases of denaturation, annealing, and extension. Denaturation places a double-stranded DNA template in high-temperature conditions of 95 °C to break its weak hydrogen bonds and enforce strand separation. Annealing cools down the reaction to allow hydrogen bonds to re-form and promote binding of primers to their complementary sequences on the single-stranded DNA template. Finally, the extension step involves DNA polymerase recognizing the primed single-stranded DNA and therefore isolating the specific sequences necessary for replication. [ 3 ]
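The amplification achieved by repeated denaturation, annealing, and extension cycles is easy to quantify: with perfect efficiency each cycle doubles the number of copies. The minimal Python sketch below models this with an efficiency term; the starting copy number and efficiency values are assumptions for illustration.

```python
# Minimal sketch of the amplification arithmetic behind PCR.
# With perfect efficiency every cycle doubles the number of copies;
# real reactions fall somewhat short, which the efficiency term models.
# The starting copy number and efficiency below are assumed values.

def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copies after a given number of denaturation/annealing/extension cycles."""
    return start_copies * (1 + efficiency) ** cycles

print(pcr_copies(100, 30))                  # ideal doubling: 100 * 2**30
print(pcr_copies(100, 30, efficiency=0.9))  # a less-than-perfect reaction
```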
Gene gun delivery (biolistics) has been an attractive method for gene delivery because it is non-viral and, alongside viral transduction, is one of the most common methods. It produces fewer adverse immune responses and carries a smaller chance of viral infection compared with viral-based transfer methods. Rather than using a viral vector, this technique uses physical means, specifically helium propulsion, to deliver transformation vectors. Gene gun delivery has traditionally been used for the generation of transgenic plants, as it can efficiently and effectively penetrate cell walls. More recently, the technique has been successful in animal cells that cannot tolerate high-level bombardment, where DNA-coated gold particles are instead delivered at lower helium pressure. This method has been used successfully both in vitro and in vivo. [ 4 ]
Electroporation is a method that uses high voltage to create pores in the membranes of mammalian cells. By pulsing with electricity, local areas of the cell membrane transiently destabilize and DNA can then enter the cell. At appropriate field strengths, damage to the host cell is minimal. [ 5 ] This technique can be used for both short-term and long-term transfectants. [ 6 ] It is also effective with almost any tissue type and has displayed high levels of gene delivery with an increase in the distribution of cells expressing the DNA. [ 5 ]
Viral transduction is a method that uses viral vectors for the stable introduction of genes into target cells. In this method, the viral vector (virion) infects host cells and directly transports DNA into the nucleus of the cell. Two common types of viruses used for transduction are adenoviruses, which tend to be transient, and lentiviruses, which integrate the DNA into the genome. Lentiviral vectors have also been an attractive viral tool because they can transduce non-dividing cells, allowing for stable transfer in a large range of host cell types. [ 7 ]
In lipofection , the gene is injected with the help of liposomes . The DNA sequence is encapsulated in a liposome with the same composition as the cell membrane. This method allows it to directly fuse with the membrane, or be endocytosed, which then releases the DNA into the cell. Lipofection is often used because it works with many different cell types, is highly reproducible, and is a fast method for both stable and transient expression. [ 8 ]
Genes are often subjected to heterologous expression to study specific protein interactions. E. coli , yeast ( S. cerevisiae , P. pastoris ), immortalized mammalian cells , and amphibian oocytes (i.e. unfertilized eggs) are commonly used for studies that require heterologous expression. [ 9 ] In choosing a particular system, economic and qualitative aspects have to be considered. Prokaryotic expression is widely used in recombinant DNA technology to produce easily manipulated proteins by well-known genetic methods with a low-cost medium. [ 10 ] Some limitations include intracellular accumulation of heterologous proteins, improper folding of the peptide, lack of post-transcriptional modifications, the potential for product degradation due to traces of protease impurities, and production of endotoxin.
Prokaryotic and eukaryotic systems, most commonly bacteria, yeast, insects, and mammalian cells, and occasionally amphibians, fungi, and protists, are used for studies that require heterologous expression. Bacteria (especially E. coli), yeast (S. cerevisiae, P. pastoris), insect, and amphibian (oocyte) cells have been used as effective hosts for expressing foreign proteins. Generally, prokaryotes are easier to work with and better understood, and are often the preferable host system. They are widely used in recombinant DNA technology to produce easily manipulated proteins by well-known genetic methods with a low-cost medium. For membrane proteins, though, researchers have observed that mammalian cells are more effective. This is because there is a lack of post-transcriptional modifications in prokaryotic systems. Limitations include intracellular accumulation of heterologous proteins, improper folding of the peptide, the potential for product degradation due to traces of protease impurities, and production of endotoxin.
A popular system is Escherichia coli because of its rapid growth rate (~20–30 minutes), capacity for continuous fermentation and relatively low cost. [ 9 ] Additionally, yeast has the capacity to express a high relative volume of heterologous protein. Specifically, up to 30% of proteins produced in yeast can be the heterologous gene product. There are also safe strains of E. coli that have been successfully generated to scale up production. In addition to E. coli's attractive host properties, this host is incredibly popular because researchers have a large amount of knowledge about its genetics, including the complete genomic sequence. However, issues arise either from the sequence of the gene of interest or from the limitations of E. coli as a host. For example, proteins expressed in large amounts in E. coli tend to precipitate and aggregate, which then requires an additional denaturation/renaturation recovery step. Finally, E. coli is only optimally effective under specific conditions dependent on the gene being inserted.
Bacillus subtilis (B. subtilis) is a gram-positive, non-pathogenic organism that does not produce lipopolysaccharides (LPS). LPS, found in gram-negative bacteria, is known to cause many degenerative disorders in humans and animals and affects the production of proteins in E. coli. Therefore, although it is deemed potentially safe, B. subtilis has not been officially categorized by the FDA as generally recognized as safe (GRAS). B. subtilis has genetic characteristics that allow it to be readily transformed with bacteriophages and plasmids . [ 11 ] Additionally, it can simplify purification through direct secretion into the culture medium , and can easily be scaled up because of its ability to non-specifically secrete these proteins. To date, B. subtilis has been used to successfully study different biological mechanisms including metabolism, gene regulation, differentiation, protein expression, and the generation of bioactive products. It is also the most well-studied gram-positive bacterium in the world, with its genomic information widely available. Drawbacks of this host system include reduced or absent expression of the protein of interest and production of degradative extracellular proteases that target heterologous proteins. Finally, despite B. subtilis' attractive properties, these limitations result in E. coli being the default host system over B. subtilis. However, with more research and optimization, B. subtilis has the potential to produce membrane proteins at large scale.
Eukaryotic cells can be used as an alternative to prokaryotic expression of proteins intended for therapeutic use. Yeast is a single-celled fungus that offers high expression levels, fast growth, and inexpensive maintenance, similar to prokaryotic systems. Because yeast is a food organism, it is also favorable for the production of pharmaceutical products, as opposed to E. coli, which may contain toxins. Yeast also has a relatively quick growth rate, with a doubling time of 90 minutes on simple media, and is easily manipulated. Similar to E. coli, yeast also has its complete genomic sequence available. The most commonly used yeast is S. cerevisiae, which can carry out post-translational modifications such as protein processing and protein folding. S. cerevisiae and P. pastoris are simple eukaryotic organisms that grow quickly and are highly adaptable. [ 13 ] Eukaryotic systems have human applications and have successfully produced vaccines for hepatitis B and Hantavirus . There is a progressive increase in the use of mammalian cells for recombinant technology and the synthesis of products with complete biological activity. This system secretes and glycosylates proteins, while introducing proper protein folding and post-translational modifications. However, when increased glycosylation abilities are employed, hyper-mannosylation, or the addition of a large number of mannose residues, is often observed. This hinders proper protein folding. Overall, yeast is a compromise between bacterial and mammalian cells, and remains a popular host system. The cost of production when using yeast as an expression system is high because of the slow growth and expensive nutrient requirements.
Baculoviruses are viruses that infect insects, and they have emerged as a system for heterologous expression in a eukaryotic host, the insect cell. As eukaryotes, insect cells provide several important functions not present in yeast and bacterial systems, including protein modification, processing, and a eukaryotic transport system. Because the viruses can be propagated at very high concentrations, the process of obtaining large amounts of recombinant proteins is simplified. Moreover, researchers have found that the expressed proteins are usually localized in their respective compartments and are easy to harvest. These genomes also tend to be very large and can incorporate larger fragments compared to prokaryotic systems, and they are noninfectious to vertebrates and mammalian cells. However, baculoviral vectors are subject to limitations. Because these viruses natively infect invertebrates, there could be differences from vertebrate protein processing that cause some harmful modifications. [ 14 ]
The unfertilized oocyte of the frog Xenopus laevis has also been utilized as a system for heterologous expression. [ 15 ] Initially used to express the acetylcholine receptor in 1982, it has since been used for a variety of purposes. These oocytes are produced by frogs year-round and thus are relatively abundant, and translation occurs with high fidelity. Of the many limitations of the oocyte system, a major one is that the produced heterologous proteins interact with the frog oocyte's own proteins, which changes their behavior compared to what it would be in a mammalian cell. Additionally, whereas mammals are diploid, X. laevis has four homologous copies of each chromosome, and thus the derived proteins may have a different function. More research is needed to examine protein production in X. laevis systems.
Although mammalian cells are cultured with more difficulty, are time-consuming, require more nutrients, and are significantly more costly, a protein that requires post-translational modifications must be expressed in mammalian cells to protect the clinical efficacy and fidelity of the product. However, even between mammalian cells, there are observed differences, for example differences in glycosylation between rodent and human cells. Even within one cell line, often stabilizing a cell line results in modified glycosylation patterns. The only commercially viable way to use mammalian cells as host systems is a high value end product. Common mammalian cell lines, especially in research include the COS-7 from Cercopithecus aethiops monkey, CHO from the Cricetulus griseus hamster, and the HEK293 human kidney line. [ 3 ]
A common protist eukaryotic expression system is the slime mold Dictyostelium discoideum , which is unique in that it has a circular plasmid packaged similarly to chromatin. [ 16 ] As a simple haploid eukaryotic organism, it can grow at high concentrations without the expensive conditions of mammalian cell culture, and it can perform post-translational modifications. The protein itself can be expressed in several forms, including membrane-attached, secreted, or cell-associated, and the system can glycosylate the protein product.
Fungi are natural decomposers in many ecosystems. As a result, they are able to secrete large amounts of enzymes, more so than bacteria-based systems. However, utilizing fungi as expression systems faces several barriers, especially the lack of knowledge regarding fungal genetics owing to its inherent complexity. Filamentous fungi in particular have been host systems of interest, and include Penicillium (from which penicillin was derived), Trichoderma reesei , and Aspergillus niger . Filamentous fungi are efficient at producing extracellular proteins, bypassing the additional step of breaking open cells to extract proteins. Some also have inexpensive growth and media requirements. Fungi also have glycosylation and modification capabilities that are helpful for eukaryotic proteins. Additionally, they have successfully produced vaccine-related proteins, and some filamentous fungi have been deemed GRAS by the FDA. However, the major drawback of using this host system is that yields are extremely low and not economically viable. Moreover, the low amount of protein that is produced is often degraded by fungal proteases. Some approaches to address this have used protease-deficient strains. Researchers are also attempting different gene disruption methods. With a better understanding of fungal gene regulation and expression, filamentous fungi may become a viable host system. [ 16 ]
Researchers often use heterologous expression techniques to study protein interactions. For example, bacteria have been optimized for the heterologous expression and biosynthesis of nitrogenase through NifEN, which can be expressed and engineered in E. coli. [ 17 ] In this host, it remains exceedingly challenging to heterologously express a complex, heteromultimeric metalloprotein like NifEN with a full complement of subunits, metalloclusters, and functionality. The NifEN variant engineered in this bacterial host can retain its cofactor efficacy at analogous cofactor-binding sites, which provides proof of heterologous expression and encourages future investigation of this metalloenzyme. Additionally, there have been recent reports of the utility of new filamentous fungal systems in the production of industrial proteins. Advantages include high transformation frequencies, the production of proteins at neutral pH, low viscosity of the fermentation broth due to strain selection for a nonfilamentous format, and short fermentation times. Many human gene products, such as albumin, IgG, and interleukin 6, have been expressed in heterologous systems with varying degrees of success. [ 18 ] Inconsistent results have hinted at a shift from gene-by-gene studies to a whole-organism approach to post-translational modification. Oocytes are well suited to such studies because of their large size and translational capacity, which allow integrated cell responses to be observed. This applies to studies ranging from single molecules within single cells to medium-throughput drug-screening applications. By screening oocytes for the expression of injected cDNA, the application of microinjection as a model for heterologous expression can be studied further in terms of cell signaling, transport, architecture, and protein function. [ 15 ]
Heterologous expression systems can be clinically incorporated to evaluate enzyme activity under highly reproducible conditions for in vitro drug development. [ 19 ] [ 13 ] This minimizes patient risk by serving as an alternative to highly invasive procedures and reducing the potential for adverse drug reactions. Enzyme activity analysis requires various expression systems to classify enzyme variants. Compared with other hosts, the expression of functional recombinant proteins in mammalian cells is particularly costly, owing to low expression levels of the enzymes that contribute to drug metabolism. In addition, post-translational modification processes differ between species, which limits accurate comparisons. The first heterologous protein product released to the market was human insulin , most commonly known as Humulin . This product was made with a strain of E. coli. Most bacteria, including E. coli, are unable to secrete such proteins efficiently, requiring added cell-harvesting, cell-disruption, and product-isolation steps before protein purification.
Like Humulin, many successes in drug development have come from heterologous expression. Genes producing natural bioactive products of interest can be cloned, expressed in host systems, and scaled up for drug production. For example, several fungi that produce clinically relevant natural products are difficult to culture in laboratory settings. However, after identification of the corresponding active gene clusters, these genes can be cloned into yeast and expressed to produce the product of interest in a more cost- and time-effective way. This method can also be used to discover new drugs: previously unstudied fungal genetic sequences can be characterized and expressed, allowing the production of new natural products. [ 20 ] Furthermore, genes can be mutagenized towards a more biologically relevant compound and then expressed to yield a new genetically modified product. [ 21 ]
Another important use of heterologous expression is to screen different drugs in a host system rather than in a more expensive or difficult-to-maintain native system. An example is the use of Mycobacterium marinum as an alternative host compared with directly using Mycobacterium tuberculosis . M. tuberculosis requires high-biosafety-level facilities for drug screening and has a slow growth rate, which makes the process expensive and time-consuming. Therefore, researchers turned to the closely related and less hazardous M. marinum , which, after heterologous expression of two drug activators, became an accurate model in which to test tuberculosis drugs. [ 22 ] An example examining a more focused drug target is the heterologous expression of ion channel proteins to test different cardiac ion channel drugs that alter their function to address heart disease. [ 23 ] Similarly, drug screening can be carried out with heterologous expression of cloned receptors. [ 24 ] The benefit of using heterologous expression here is that it produces large amounts of the target receptors of drugs of interest and is generally inexhaustible, reproducible, and inexpensive. These receptors can then be used in assays to test the effectiveness and specificity of drug binding. Moreover, the produced receptors themselves could be used as therapeutics, serving as decoys that bind and attenuate toxins or excess signaling molecules.
Recombinant technology has also played a role in biofuel development, which has been explored using expression systems in bacteria, plants, and yeast. In particular, the heterologous expression of cellulase enzymes makes use of cellulose , the most abundant raw material worldwide. Cellulolytic enzymes are found in plants, insects, bacteria, and fungi, and assist in the conversion of biomass to biofuel [ 25 ] by hydrolyzing cellulose into sugar molecules. Manipulation of the cellular expression levels of cellulolytic enzymes is necessary in fungal hosts in order to overcome degradation. However, bioprocessing has proved difficult for producing high-yield proteins and requires the incorporation of other enzymes. Various microbial strains can be combined to express enzymes that result in a total increase of enzyme yield on an economically viable scale. [ 25 ]
Golden rice is a GMO created in 2005 through heterologous expression as a humanitarian effort to address the effects of Vitamin A deficiency. Oryza sativa rice was transformed with a gene to produce β-carotene, a Vitamin A precursor with a yellow-orange color. [ 26 ]
Several limitations, observed in bacteria, yeast, and plants, prevent heterologous expression from generating products at an economically feasible level. [ 25 ] First, these methods are still extremely expensive compared with natural production, often take longer, and require special conditions for host culture and induction of expression. Additionally, most methods have still not been optimized, and some even give lower expression than the native organism. Biosynthetic genes for natural biologically active products of interest, in particular, have been found to express very poorly under laboratory conditions, largely because of their generally large gene sizes. [ 27 ] Although protein products are produced, they are often generated at very low yield, are poorly secreted due to low solubility, or generate other unwanted byproducts. Successful heterologous production of target products is primarily seen with low-complexity genes with a small number of operons. This is often due to mismatches in regulatory and expression-induction pathways and machinery, and is reflected in the observed degradation of certain amino acid sequences, decreased specific activity, incorrect membrane transport, and glycosylation effects. Additionally, there are barriers during the translation process, where host tRNA effects reduce the efficiency of translation, specifically recognition by host ribosomes. Similarly, modifications to tRNA-linked bases that differ from the host system may reduce the translation of proteins quantitatively and qualitatively. For example, translating a foreign gene in a host system that did not contain the required tRNA resulted in early termination at the codon where the tRNA was missing. Collectively, when the host translation system differs from the native system from which the genes are introduced, coding errors, frameshifts, or premature or improper sequence termination are frequent. Consequently, this leads to a lower yield of functional proteins or unintended overexpression of the protein. These errors are especially prominent with the significant and unnatural increase in demand for host-system biological machinery. Often, this causes the reallocation of cellular resources from normal processes to the production of the heterologous protein, straining tRNA and amino acid supplies, quality-control and secretion systems, as well as the NADPH required for anabolic processes. Moreover, unnatural heterologous protein buildup also leads to adverse host effects. [ 28 ] Overall, the implications are evident not only in low product yields but also in host stress responses and decreased host viability.
There are many areas of active research addressing these limitations of heterologous expression, especially in commercial settings. One approach is to determine the optimal host system for each specific target protein product, as proteins, especially non-native ones, often behave differently in other organisms, and some host systems may produce higher yields or require milder conditions than others. Specifically, incorporating different promoters or optimized genetic sequences, and using variants or strains of organisms that allow for the required post-translational modifications, is an approach of interest. For example, variants with efficient secretion may allow heterologous expression products to be produced at industrially relevant levels. Other strategies include increasing the availability of cofactors, improving protein-folding capacity, improving gene promoters, and designing control systems that adjust to differing resource demands. Another approach is to incorporate transient periods in which heterologous production is lowered to allow the host system to recover. To address errors in translation, it is possible to overexpress tRNA to mitigate shortages; however, base modifications are still heavily dependent on the host system. [ 28 ] Scientists have attempted to design a universal system to mitigate these concerns, but there is still much to be discovered about the connection between hosts and native producers, and about the implications of the increased burden on host systems.
Advancements in recombinant DNA technology have revolutionized the idea of treating diseases through the reconstruction or replacement of faulty genes. Gene therapy is a technique that transplants normal genes into cells that contain missing or defective genes in order to correct genetic disorders. Nevertheless, several concerns have been raised about the efficacy of gene therapy due to its limited success rate in clinical trials. [ 29 ] [ 30 ] Over the years, immense effort has been devoted to fully understanding vectors, viruses, and their communication with the host's immune system. However, not every defense system reacts the same way. Some patients have experienced an “autoimmune-like” response in which their body rejects the treatment. The heterologous genes are recognized as foreign by the host and can induce cytokine-mediated inflammatory responses, with the modified cells ultimately destroyed by cytotoxic T-cells. This has called into question the relationship between vector dosage and cellular toxicity, as scientists recognize that inappropriate activation of these responses can cause severe side effects not only in the disease-affected cells but also in other healthy parts of the body. [ 29 ]
Genetic modification used to address traits outside of medical necessity, such as eye color, athletic ability, or intelligence, is one example that has brought the ethics of its purpose into question. [ 31 ] Eugenics , which values one set of desirable human characteristics over another, has led to fears of potential backlash toward genetically modified, or genetically unmodified, individuals in society. In the case of germline editing, there is no guarantee that treatment will provide an absolute cure throughout the patient's life and/or whether those genes can be passed on to their offspring. [ 32 ] CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) , a technique that allows genes to be edited with ease, may present certain benefits, but it may also pose further risks to the human body. For example, there may be technical limitations to CRISPR editing. Until advancements fully equip scientists with the knowledge to understand all potential benefits and risks associated with CRISPR editing, concerns regarding the safety of its applications remain. [ 32 ] The possibility that editing could bring about an incomplete or inaccurate genetic sequence has been reported in several experiments involving both animal and human cell line studies. [ 32 ] Since it is almost impossible to predict a favorable outcome with certainty, germline editing is all the more difficult to promote as a definite cure for anyone suffering from terminal illnesses. | https://en.wikipedia.org/wiki/Heterologous_expression
A homologous booster shot involves the administration of the same vaccine as previously administered, while a heterologous booster shot involves the administration of a different vaccine. [ citation needed ]
" Heterologous prime-boost immunization is administration of two different vectors or delivery systems expressing the same or overlapping antigenic inserts." [ 1 ]
"An effective vaccine usually requires more than one time immunization in the form of prime-boost. Traditionally the same vaccines are given multiple times as homologous boosts. New findings suggested that prime-boost can be done with different types of vaccines containing the same antigens. In many cases such heterologous prime-boost can be more immunogenic than homologous prime-boost." [ 2 ]
This article about vaccines or vaccination is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heterologous_vaccine |
Heterolysis (hetero = other/different, lysis = cell breakdown) is the spontaneous death and disintegration of a cell from factors other than itself. [ 1 ] In contrast, autolysis happens when a cell dies due to its own secretions or signaling. [ 1 ] Some external factors that cause heterolysis are hypoxia , biological factors, chemical agents like drugs or free radical reactions, physical factors like electric shock, trauma, extreme radiation, and immunological reactions such as inflammation or allergic reactions. [ 2 ] Such extrinsic cell death is important in executing proper immune response functions. This is commonly seen when a bacterial or viral infection occurs and the pathogen forces the cell to stop apoptosis to avoid death of host cells. In such scenarios, heterolytic factors make it possible to combat infections by lysing the infected cells. [ 3 ] | https://en.wikipedia.org/wiki/Heterolysis_(biology)
In chemistry , heterolysis or heterolytic fission (from Greek ἕτερος (heteros) ' different ' and λύσις (lusis) ' loosening ' ) is the process of cleaving/breaking a covalent bond where one previously bonded species takes both original bonding electrons from the other species. [ 1 ] During heterolytic bond cleavage of a neutral molecule, a cation and an anion will be generated. Most commonly the more electronegative atom keeps the pair of electrons becoming anionic while the more electropositive atom becomes cationic.
Heterolytic fission almost always happens to single bonds ; the process usually produces two fragment species.
The energy required to break the bond is called the heterolytic bond dissociation energy , which is similar (but not equivalent) to homolytic bond dissociation energy commonly used to represent the energy value of a bond.
One example of the difference in these energies is the energy required to break a H−H bond.
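As a rough illustration of the gap between the two quantities, the following Hess-cycle estimate uses standard gas-phase textbook values (a homolytic bond dissociation energy of about 436 kJ/mol for H2, an ionization energy of about 1312 kJ/mol for H, and an electron affinity of about 73 kJ/mol for H); these figures are supplied here for orientation and are not taken from this article.

```latex
% Hedged back-of-the-envelope comparison, gas phase, standard textbook values
\begin{align*}
\mathrm{H_2} &\rightarrow 2\,\mathrm{H}^{\bullet}
  & \Delta H &\approx +436~\mathrm{kJ\,mol^{-1}} && \text{(homolysis)}\\
\mathrm{H_2} &\rightarrow \mathrm{H}^{+} + \mathrm{H}^{-}
  & \Delta H &\approx 436 + 1312 - 73 \approx +1675~\mathrm{kJ\,mol^{-1}} && \text{(heterolysis)}
\end{align*}
```

In solution these numbers change dramatically, because the resulting ions are strongly stabilized by solvation.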
The discovery and categorization of heterolytic bond fission was clearly dependent on the discovery and categorization of the chemical bond.
In 1916, chemist Gilbert N. Lewis developed the concept of the electron-pair bond, in which two atoms share one to six electrons, thus forming the single electron bond, a single bond, a double bond , or a triple bond . [ 3 ] This became the model for a covalent bond.
In 1932 Linus Pauling first proposed the concept of electronegativity , which also introduced the idea that electrons in a covalent bond may not be shared evenly between the bonded atoms. [ 4 ]
However, ions had been studied before bonds, mainly by Svante Arrhenius in his 1884 dissertation. Arrhenius pioneered the development of ionic theory and proposed definitions for acids as molecules that produce hydrogen ions, and bases as molecules that produce hydroxide ions.
The rate of many reactions involving unimolecular heterolysis depends heavily on the rate of ionization of the covalent bond. The limiting reaction step is generally the formation of ion pairs. One group in Ukraine did an in-depth study of the role of nucleophilic solvation and its effect on the mechanism of bond heterolysis. They found that the rate of heterolysis depends strongly on the nature of the solvent .
For example, a change of reaction medium from hexane to water increases the rate of tert-butyl chloride (t-BuCl) heterolysis by 14 orders of magnitude. [ 5 ] This is caused by very strong solvation of the transition state . The main factors that affect heterolysis rates are the solvent's polarity and electrophilicity, as well as its ionizing power. The polarizability, nucleophilicity and cohesion of the solvent had a much weaker effect on heterolysis. [ 5 ]
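To put the quoted 14-orders-of-magnitude acceleration in more familiar units, the rate ratio can be translated into a change in activation free energy using simple transition-state theory; the temperature of 298 K below is an assumed value chosen for illustration, not a figure from the cited study.

```latex
% Hedged estimate: rate ratio -> change in activation free energy at 298 K
\Delta\Delta G^{\ddagger} \;=\; RT \ln\!\left(\frac{k_{\mathrm{water}}}{k_{\mathrm{hexane}}}\right)
 \;\approx\; (8.314~\mathrm{J\,mol^{-1}\,K^{-1}})(298~\mathrm{K})\,\ln\!\left(10^{14}\right)
 \;\approx\; 8.0\times10^{4}~\mathrm{J\,mol^{-1}} \;\approx\; 80~\mathrm{kJ\,mol^{-1}}
```

That is, under these assumptions, solvation of the ionic transition state in water lowers the activation barrier by roughly 80 kJ/mol relative to hexane.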
However, there is some debate on effects of the nucleophilicity of the solvent, some papers claim it has no effect, [ 6 ] while some papers claim that more nucleophilic solvents decrease the reaction rate. [ 7 ] | https://en.wikipedia.org/wiki/Heterolysis_(chemistry) |
A heteromer is something that consists of different parts; the antonym of homomeric . Examples are: | https://en.wikipedia.org/wiki/Heteromer |
The heterometallic copper-aluminum superatom is a Mackay ‐type icosahedral cluster compound with formula [Cu 43 Al 12 ](Cp*) 12 . It is an open‐shell 67‐electron superatom . [ 1 ] It is notable for its large electron count compared to other heterometallic superatoms and its unprecedented electron structure of an open-shell configuration. As of 2018, it was the largest heterometallic superatom to be created using wet chemical synthesis . [ 2 ]
Combining (pentamethylcyclopentadienyl)aluminium(I) with mesitylcopper(I) in benzene at 78 °C under an inert atmosphere for 48 hours, followed by slow cooling, forms black cocrystals of the compound with benzene:
The material is created from single-atom sources of copper and aluminium, which spontaneously separate from the organic compounds to form the superatom cluster. The exergonic nature of the reaction demonstrates that this specific arrangement of copper and aluminum atoms is stable. [ 2 ]
According to crystallographic and computational analysis, the complex contains a central copper atom surrounded by a first icosahedral shell containing twelve copper atoms, followed by a second icosahedral shell containing twelve aluminium atoms (located at the vertices of the icosahedron) and thirty copper atoms (located at the midpoints of the edges). This central group of metal atoms (of radius 5.137 Å) [ 3 ] is surrounded by twelve pentamethylcyclopentadienyl ligands (one attached to each aluminium atom) that assist in protecting it against further reaction. [ 1 ] It was the first ligated heterometallic Mackay-type cluster to be discovered. | https://en.wikipedia.org/wiki/Heterometallic_copper-aluminum_superatom |
Heteromorphosis (/ ˌhɛt.ə.rəʊˈmɔrf.ə.sɪs /, / ˌhɛt.rə.- /) ( Greek : έτερος – other; morphe – form) refers to situations where an organ or tissue differs from what is expected, [ 1 ] either because of (embryonic) developmental anomalies, or after reparative regeneration following a trauma . [ 2 ] The differences include an abnormal location [ 3 ] or an abnormal shape. [ 4 ] It should not be confused with homeosis , which refers to a major change in the tissue structure of an organ. [ 1 ] Heteromorphosis illustrates that some manifestations of regenerative capacity are imperfect.
Jacques Loeb proposed the term in 1892, while he was conducting experiments on the distortion of polarity in hydroids . [ 2 ] Many organisms, from protozoans to chordates , can show examples of heteromorphosis, but they are easier to find in lower forms of animals: | https://en.wikipedia.org/wiki/Heteromorphosis
A heteronuclear molecule is a molecule composed of atoms of more than one chemical element . [ 1 ] [ 2 ] For example, a molecule of water (H 2 O) is heteronuclear because it has atoms of two different elements, hydrogen (H) and oxygen (O).
Similarly, a heteronuclear ion is an ion that contains atoms of more than one chemical element . For example, the carbonate ion ( CO 2− 3 ) is heteronuclear because it has atoms of carbon (C) and oxygen (O). The lightest heteronuclear ion is the helium hydride ion (HeH + ). This is in contrast to a homonuclear ion , which contains all the same kind of atom, such as the dihydrogen cation , or atomic ions that only contain one atom such as the hydrogen anion (H − ). | https://en.wikipedia.org/wiki/Heteronuclear_molecule
The heteronuclear single quantum coherence or heteronuclear single quantum correlation experiment, normally abbreviated as HSQC , is used frequently in NMR spectroscopy of organic molecules and is of particular significance in the field of protein NMR . The experiment was first described by Geoffrey Bodenhausen and D. J. Ruben in 1980. [ 1 ] The resulting spectrum is two-dimensional (2D) with one axis for proton ( 1 H) and the other for a heteronucleus (an atomic nucleus other than a proton), which is usually 13 C or 15 N . The spectrum contains a peak for each unique proton attached to the heteronucleus being considered. The 2D HSQC can also be combined with other experiments in higher-dimensional NMR experiments, such as NOESY-HSQC or TOCSY-HSQC.
The HSQC experiment is a highly sensitive 2D-NMR experiment and was first described in a 1 H— 15 N system, but is also applicable to other nuclei such as 1 H— 13 C and 1 H— 31 P. The basic scheme of this experiment involves the transfer of magnetization on the proton to the second nucleus, which may be 15 N, 13 C or 31 P, via an INEPT (Insensitive nuclei enhanced by polarization transfer) step. After a time delay ( t 1 ), the magnetization is transferred back to the proton via a retro-INEPT step and the signal is then recorded. In HSQC, a series of experiments is recorded where the time delay t 1 is incremented. The 1 H signal is detected in the directly measured dimension in each experiment, while the chemical shift of 15 N or 13 C is recorded in the indirect dimension which is formed from the series of experiments.
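The incremented-delay scheme described above can be illustrated with a toy numerical sketch. The snippet below is not a real pulse-sequence simulator: it simply treats the 2D signal as a 15N-frequency modulation in t1 multiplied by a 1H-frequency signal in t2, with made-up offsets, spectral widths, and decay rates, and shows that Fourier transforming over both dimensions yields one cross peak per 1H–15N pair.

```python
import numpy as np

# Toy sketch of how the indirect (15N) dimension of an HSQC is built by
# incrementing the delay t1. This is NOT a real pulse-sequence simulator;
# offsets, spectral widths and decay rates are invented example values.
n_t1, n_t2 = 128, 256              # points in indirect (15N) and direct (1H) dimensions
sw1, sw2 = 2000.0, 4000.0          # spectral widths in Hz (assumed)
omega_n, omega_h = 600.0, 1200.0   # offsets of one 15N and its attached 1H (Hz, assumed)
r2 = 20.0                          # decay rate in s^-1 (assumed)

t1 = np.arange(n_t1) / sw1         # incremented evolution delays (one per experiment)
t2 = np.arange(n_t2) / sw2         # directly detected acquisition times

# During t1 the magnetization evolves at the 15N frequency; during t2 the signal
# is detected at the 1H frequency, so the 2D signal is (crudely) a product.
fid = (np.cos(2 * np.pi * omega_n * t1) * np.exp(-r2 * t1))[:, None] \
      * (np.exp(1j * 2 * np.pi * omega_h * t2) * np.exp(-r2 * t2))[None, :]

# Fourier transforming both dimensions gives one cross peak per N-H pair,
# located at the (15N, 1H) frequency pair.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(fid)))
print("peak at (indirect, direct) indices:", np.unravel_index(np.argmax(spectrum), spectrum.shape))
```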
The 15 N HSQC experiment is one of the most frequently recorded experiments in protein NMR. The HSQC experiment can be performed using the natural abundance of the 15 N isotope , but normally for protein NMR, isotopically labeled proteins are used. Such labelled proteins are usually produced by expressing the protein in cells grown in 15 N-labelled media.
Each residue of the protein , with the exception of proline , has an amide proton attached to a nitrogen in the peptide bond . The HSQC provides the correlation between the nitrogen and amide proton, and each amide yields a peak in the HSQC spectra. Each residue (except proline) therefore can produce an observable peak in the spectra, although in practice not all the peaks are always seen due to a number of factors. Normally the N-terminal residue (which has an NH 3 + group attached) is not readily observable due to exchange with solvent. [ 3 ] In addition to the backbone amide resonances, sidechains with nitrogen-bound protons will also produce peaks.
In a typical HSQC spectrum, the NH 2 peaks from the sidechains of asparagine and glutamine appear as doublets on the top right corner, and a smaller peak may appear on top of each peak due to deuterium exchange from the D 2 O normally added to an NMR sample, giving these sidechain peaks a distinctive appearance. The sidechain amine peaks from tryptophan are usually shifted downfield and appear near the bottom left corner. The backbone amide peaks of glycine normally appear near the top of the spectrum.
The 15 N HSQC is normally the first heteronuclear spectrum acquired for the assignment of resonances, in which each amide peak is assigned to a particular residue in the protein. If the protein is folded, the peaks are usually well-dispersed, and most of the individual peaks can be distinguished. If there is a large cluster of severely overlapped peaks around the middle of the spectrum, that would indicate the presence of significant unstructured elements in the protein. In such cases, where there is severe overlap of resonances, assignment can be difficult. The assignment of the HSQC spectrum requires other experiments, ideally triple-resonance experiments with 15 N- and 13 C-labelled proteins, that provide sequential connectivities between residues so that the resonances can be linked to particular residues and sequentially assigned. The assignment of the spectrum is essential for a meaningful interpretation of more advanced NMR experiments such as structure determination and relaxation analysis.
Chemicals labelled with the 15 N isotope are relatively inexpensive, and the 15 N HSQC is a sensitive experiment whose spectrum can be acquired in a relatively short time; it is therefore often used to screen candidates for their suitability for structure determination by NMR, as well as for optimization of the sample conditions. The time-consuming process of structure determination is usually not undertaken until a good HSQC spectrum can be obtained. The HSQC experiment is also useful for detecting binding interfaces in protein–protein interactions, as well as interactions with ligands such as drugs. By comparing the HSQC of the free protein with that of the ligand-bound protein, changes in the chemical shifts of some peaks may be observed; these peaks are likely to lie on the binding surface, where binding has perturbed their chemical shifts. [ 4 ] The 15 N HSQC may also be used in relaxation analysis for studies of the molecular dynamics of proteins, the determination of ionization constants , and other studies.
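The comparison of free and ligand-bound spectra mentioned above is often reduced to a single chemical shift perturbation (CSP) number per residue. A minimal sketch is shown below; the weighting factor of about 0.14 for 15N is a commonly quoted convention rather than something specified in this article, and the residue names and shift values are invented example data.

```python
import math

# Sketch of chemical shift perturbation (CSP) mapping from two 15N-HSQC spectra
# (free vs. ligand-bound). The ~0.14 weighting for 15N is a commonly quoted
# convention (an assumption here); residue names and shifts are invented data.
ALPHA = 0.14

free = {   # residue: (1H shift, 15N shift) in ppm, free protein
    "G10": (8.20, 109.5),
    "L45": (7.95, 122.1),
    "W60": (10.10, 129.3),
}
bound = {  # same residues after adding the ligand
    "G10": (8.21, 109.6),
    "L45": (8.30, 124.0),
    "W60": (10.12, 129.4),
}

for res, (h_free, n_free) in free.items():
    h_bound, n_bound = bound[res]
    csp = math.sqrt(0.5 * ((h_bound - h_free) ** 2 + (ALPHA * (n_bound - n_free)) ** 2))
    # Residues with large CSPs are candidates for the binding interface.
    print(f"{res}: CSP = {csp:.3f} ppm")
```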
This experiment provides correlations between a carbon and its attached protons. The constant time (CT) version of 1 H— 13 C HSQC is normally used, as it circumvents the splitting of signals by homonuclear 13 C— 13 C J couplings, which reduces spectral resolution. [ 5 ] The "constant time" refers to the entire evolution period between the two INEPT steps, which is kept constant in this experiment. If this evolution period is set to the inverse of the J-coupling constant, then the sign of the magnetization of carbons with an odd number of attached aliphatic carbons will be opposite to that of carbons with an even number. For example, if the C β of leucine appears as a positive peak (2 aliphatic carbons attached), then the C γ (3 aliphatic carbons attached) and C α (1 aliphatic carbon attached) would appear negative.
The use of 1 H— 31 P HSQC is relatively uncommon in lipidomics; however, the use of 31 P in lipidomics dates back to the 1990s. [ 6 ] The technique is limited compared with mass spectrometry by its requirement for much larger sample sizes, but the combination of 1 H— 31 P HSQC with mass spectrometry is regarded as a thorough approach to lipidomics, and techniques for 'dual spectroscopy' are becoming available. [ 7 ] | https://en.wikipedia.org/wiki/Heteronuclear_single_quantum_coherence_spectroscopy
The mononuclear spot test or monospot test , a form of the heterophile antibody test , [ 1 ] is a rapid test for infectious mononucleosis due to Epstein–Barr virus (EBV) . It is an improvement on the Paul–Bunnell test. [ 2 ] The test is specific for heterophile antibodies produced by the human immune system in response to EBV infection. Commercially available test kits are 70–92% sensitive and 96–100% specific , with a lower sensitivity in the first two weeks after clinical symptoms begin. [ 3 ] [ 4 ]
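As a hedged illustration of what the quoted sensitivity and specificity ranges mean in practice, the short sketch below converts them into positive and negative predictive values for an assumed pre-test probability; the 10% prevalence figure is an arbitrary example, not a number from this article.

```python
# Illustration of what the quoted sensitivity/specificity ranges imply for
# predictive values. The 10% pre-test probability is an assumed example figure.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives per tested patient
    fn = (1 - sensitivity) * prevalence        # false negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)      # (PPV, NPV)

# Lower end of the quoted performance, assuming 1 in 10 tested patients has EBV mononucleosis.
ppv, npv = predictive_values(sensitivity=0.70, specificity=0.96, prevalence=0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```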
The United States Centers for Disease Control and Prevention deems the monospot test not to be very useful. [ 5 ]
It is indicated as a confirmatory test when a physician suspects EBV, typically in the presence of clinical features such as fever, malaise, pharyngitis, tender lymphadenopathy (especially posterior cervical; often called "tender glands") and splenomegaly . [ 6 ]
In the case of delayed or absent seroconversion , an immunofluorescence test could be used if the diagnosis is in doubt. It has the following characteristics: antibodies to viral capsid antigen (VCA) of the IgM class, antibodies to EBV early antigen (anti-EA), and absent antibodies to EBV nuclear antigen (anti-EBNA). [ citation needed ]
One source states that the specificity of the test is high, virtually 100%, [ 7 ] while another states that a number of other conditions can cause false positives . [ 5 ] Rarely, however, a false-positive heterophile antibody test may result from systemic lupus erythematosus , toxoplasmosis , rubella , lymphoma and leukemia . [ 7 ]
However, the sensitivity is only moderate, so a negative test does not exclude EBV. This lack of sensitivity is especially the case in young children, many of whom will not produce detectable amounts of the heterophile antibody and will thus have a false negative test result. [ 8 ]
It will generally not be positive during the 4–6 week incubation period before the onset of symptoms. The highest amount of heterophile antibodies occurs 2 to 5 weeks after the onset of symptoms. [ 9 ] If positive, it will remain so for at least six weeks. [ 10 ] An elevated heterophile antibody level may persist up to 1 year. [ 9 ]
The test is usually performed using commercially available test kits which detect the reaction of heterophile antibodies in a person's blood sample with horse or cow red blood cell antigens. These test kits work on the principles of latex agglutination or immunochromatography . Using this method, the test can be performed by individuals without specialized training, and the results may be available in as little as five minutes. [ 8 ] [ 11 ]
Manual versions of the test rely on the agglutination of horse erythrocytes by heterophile antibodies in patient serum. Heterophile means it reacts with proteins across species lines. [ 12 ] Heterophile also can mean that it is an antibody that reacts with antigens other than the antigen that stimulated it (an antibody that crossreacts). [ citation needed ] A 20% suspension of horse red cells is used in an isotonic 3–8% sodium citrate formulation.
One drop of the patient's serum to be tested is mixed on an opal glass slide with one drop of a particulate suspension of guinea-pig kidney stroma, and a suspension of ox red cell stroma; sera and suspensions are mixed with a wooden applicator 10 times.
Ten microliters of the horse red cell suspension are then added and mixed with each drop of adsorbed serum.
The mixture is left undisturbed for one minute (not rocked or shaken).
It is then examined for the presence or absence of red cell agglutination .
If stronger with the sera adsorbed with guinea-pig kidney, the test is positive.
If stronger with the sera adsorbed with ox red cell stroma, the test is negative.
If agglutination is absent in both mixtures, the test is negative.
A known 'positive' and 'negative' control serum is tested with each batch of test sera. [ citation needed ] | https://en.wikipedia.org/wiki/Heterophile_antibody_test |
Heterophile antigens are antigens of similar, if not identical, nature that are present in different tissues in different biological species, classes, or kingdoms. [ 1 ] Usually different species have different antigen sets, but a heterophile antigen is shared by different species. Other heterophile antigens are responsible for some diagnostic serological tests such as:
Chemically, heterophile antigens are composed of lipoprotein-polysaccharide complexes . There is a possibility of identical chemical groupings existing in the structures of mucopolysaccharides and lipids .
Example: the Forssman antigen , a cross-reacting microbial antigen, such that antibodies to these antigens produced by one species cross-react with antigens of other species. It is widely present in some plants, bacteria, animals and birds; however, it is not present in rabbits. Therefore, antibodies (anti-Forssman antibodies) can be produced in rabbit serum by injecting the antigen. | https://en.wikipedia.org/wiki/Heterophile_antigen
Heteroresistance is a phenotype in which a bacterial isolate contains sub-populations of cells with increased antibiotic resistance compared with the susceptible main population. [ 1 ] This phenomenon is highly prevalent across several antibiotic classes and bacterial isolates and is associated with treatment failure through the enrichment of low-frequency resistant subpopulations in the presence of antibiotics. [ 2 ] Heteroresistance is known to be highly unstable, meaning that the resistant sub-population can revert to susceptibility within a limited number of generations of growth in the absence of antibiotic. [ 2 ] Owing to the instability and transient character of heteroresistant subpopulations, their detection is often difficult with conventional minimum inhibitory concentration methods, such as Etests and disk diffusion tests . [ 3 ] [ 1 ] The gold standard for heteroresistance detection is the population analysis profile test (PAP test), which gives fewer false-positive and false-negative outcomes than the conventional methods, making it more reliable. [ 1 ] It is, however, a labour-intensive and costly detection method, which makes it difficult to implement in clinical microbiology laboratories. [ 1 ] Hence, there is a significant demand for clinical microbiology laboratories to have rapid, standardized methods to identify heteroresistance in pathologic specimens so that a proper antibiotic treatment can be prescribed for patients.
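The sketch below illustrates, in simplified form, how a PAP curve might be turned into a yes/no heteroresistance call. The 8-fold concentration threshold and the 1 × 10^-7 subpopulation frequency are one working definition used in parts of the literature and are assumed here rather than taken from this article; the survival data are invented example values.

```python
# Simplified sketch of turning a population analysis profile (PAP) into a
# heteroresistance call. The 8-fold threshold and 1e-7 frequency cut-off are
# one working definition assumed for illustration; the data are invented.
def is_heteroresistant(pap, main_mic, fold=8, min_freq=1e-7):
    """pap maps antibiotic concentration (mg/L) -> surviving fraction of cells."""
    threshold = fold * main_mic
    return any(conc >= threshold and frac >= min_freq for conc, frac in pap.items())

pap_curve = {0.5: 1.0, 1: 0.9, 2: 1e-3, 4: 1e-5, 8: 5e-6, 16: 2e-7}
print(is_heteroresistant(pap_curve, main_mic=1))  # True: subpopulation survives at >= 8 mg/L
```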
The enrichment of resistance sub-populations can be due to the acquisition of resistant mutations that are genetically stable but have high fitness cost or due to the enrichment of sub-population with increased copy number of resistance-conferring tandem gene amplifications . [ 4 ] [ 1 ] Tandem gene amplification of antibiotic resistance genes, which results in an increased gene dosage of the resistance genes, is the most common mechanism for unstable heteroresistance in Gram-negative bacteria. [ 4 ] [ 5 ]
Two other mechanisms conferring unstable heteroresistance, both resulting in an increased gene dosage of the resistance genes, are an increase in plasmid copy number and transposition of the resistance genes onto cryptic plasmids that increase in copy number. These mechanisms are likewise considered unstable, leading to a rapid return to susceptibility when antibiotics are not present. [ 5 ] | https://en.wikipedia.org/wiki/Heteroresistance
Heterostasis is a medical term. It is a neologism coined by Walter Cannon intended to connote an alternative but related meaning to its lexical sibling Homeostasis , which means 'same state'. Any device, organ, system or organism capable of Heterostasis (multistable behavior) can be represented by an abstract state machine composed of a characteristic set of related, interconnected states, linked dynamically by change processes allowing transition between states.
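The abstract state machine picture described above can be sketched in a few lines of code. The states and triggering events below are invented labels chosen purely for illustration; the only point being made is that a heterostatic system has several distinct stable states and explicit transition processes between them, rather than a single set-point.

```python
# Minimal state-machine sketch of heterostasis: several distinct stable states
# with explicit transition processes. State and event names are invented labels.
TRANSITIONS = {
    ("rest", "stressor_applied"): "alarm",
    ("alarm", "stressor_persists"): "adapted",   # a new, different stable set-point
    ("alarm", "stressor_removed"): "rest",
    ("adapted", "stressor_removed"): "rest",
}

def step(state, event):
    # Events with no defined transition leave the state unchanged: each state
    # is stable on its own until a change process moves the system elsewhere.
    return TRANSITIONS.get((state, event), state)

state = "rest"
for event in ["stressor_applied", "stressor_persists", "stressor_removed"]:
    state = step(state, event)
    print(event, "->", state)
```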
Although the term 'Heterostasis' is an obvious rearrangement (by syntactically substituting the prefix 'Hetero-' for its dichotome 'Homeo-', and likewise swapping the semantic reference, from 'same'/'single' to 'different'/'many'), the endocrinologist Hans Selye [ 1 ] is generally credited with its invention. An excellent overview of the two concepts is contained in the Cambridge Handbook of Psychophysiology, Chapter 19. [ 2 ] Selye's ideas were used by Gunther et al., [ 3 ] in which dimensionless numbers (allometric invariance analysis) were used to investigate the existence of heterostasis in canine cardiovascular systems.
The equivalent term Allostasis is used in biological contexts, where state change is analog (continuous), but Heterostasis is sometimes preferred for systems which possess a finite number of distinct (discrete) internal states, such as those containing computational processes. The term Servomechanism is usually used in industrial/mechanical situations (non-biological and non-computational) where it often applies to analog state change, e.g. in a Direct Current Servomotor . | https://en.wikipedia.org/wiki/Heterostasis_(cybernetics) |
The term heterostrain was proposed in 2018 in the context of materials science to simplify the designation of possible strain situations in van der Waals heterostructures where two (or more) two-dimensional materials are stacked on top of each other. [ 1 ] These layers can experience the same deformation (homostrain) or different deformations (heterostrain). In addition to twist , heterostrain can have important consequences on the electronic [ 2 ] [ 3 ] and optical [ 4 ] properties of the resulting structure. As such, the control of heterostrain [ 5 ] [ 6 ] is emerging as a sub-field of straintronics in which the properties of 2D materials are controlled by strain. Recent works have reported a deterministic control of heterostrain by sample processing [ 7 ] or with the tip of an AFM [ 8 ] of particular interest in twisted heterostructures. Heterostrain alone (without twist) has also been identified as a parameter to tune the electronic properties of van der Waals structures as for example in twisted graphene layers with biaxial heterostrain. [ 9 ]
Heterostrain is constructed from the Greek prefix hetero- (different) and the noun strain . It means that the two layers constituting the structure are subject to different strains. [ 1 ] This is in contrast with homostrain, in which the two layers are subject to the same strain. [ 1 ] Heterostrain is designated as "relative strain" by some authors. [ 10 ]
For simplicity, the case of two graphene layers is considered. The description can be generalized for the case of different 2D materials forming an heterostructure .
In nature, the two graphene layers usually stack with a shift of half a unit cell. This configuration is the most energetically favorable and is found in graphite . If one layer is strained while the other is left intact, a moiré pattern appears, signaling the regions where the atomic lattices of the two layers are in or out of registry. The shape of the moiré pattern depends on the type of strain.
In general, a layer can be deformed by an arbitrary combination of both types of heterostrain.
Heterostrain can be measured by scanning tunneling microscopy , which provides images showing both the atomic lattice of the first layer and the moiré superlattice. Relating the atomic lattice to the moiré lattice allows the relative arrangement of the layers (biaxial heterostrain, uniaxial heterostrain and twist) to be determined entirely. [ 11 ] The method is immune to calibration artifacts, which affect the images of the two layers identically and therefore cancel out in the relative measurement. Alternatively, with a well-calibrated microscope and if biaxial heterostrain is low enough, it is possible to determine twist and uniaxial heterostrain from knowledge of the moiré period in all directions. [ 12 ] By contrast, it is much more difficult to determine homostrain, which requires a calibration sample.
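For the isotropic (biaxial) case, the relation between moiré period, twist, and lattice mismatch can be written in closed form. The sketch below uses the commonly quoted approximation λ = (1+δ)a / √(δ² + 2(1+δ)(1−cos θ)), which reduces to λ = a/(2 sin(θ/2)) for pure twist; this formula and the graphene lattice constant of 0.246 nm are standard values assumed here for illustration, not results from the references cited in this article, and uniaxial heterostrain (which distorts the moiré pattern anisotropically) is not treated.

```python
import math

# Moire period of two hexagonal lattices differing by a twist angle theta and an
# isotropic (biaxial) mismatch delta, using the commonly quoted approximation
# lambda = (1 + delta) * a / sqrt(delta**2 + 2 * (1 + delta) * (1 - cos(theta))).
# The graphene lattice constant and the formula are assumed standard values;
# uniaxial heterostrain (anisotropic) is not covered by this scalar formula.
A_GRAPHENE = 0.246  # nm

def moire_period(theta_deg, delta, a=A_GRAPHENE):
    theta = math.radians(theta_deg)
    return (1 + delta) * a / math.sqrt(delta ** 2 + 2 * (1 + delta) * (1 - math.cos(theta)))

# Pure twist near the "magic" angle, and the same twist with 0.1% biaxial heterostrain.
print(round(moire_period(1.1, 0.0), 2))    # about 12.8 nm
print(round(moire_period(1.1, 0.001), 2))  # slightly different period
```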
Heterostrain is generated during the fabrication of the 2D materials stack. It can result from a meta-stable configuration during bottom-up assembly [ 1 ] or from layer manipulation in the tear and stack technique. [ 13 ] It has been shown to be ubiquitous in twisted graphene layers near the magic twist angle and to be the main factor determining the flat band width of those systems. [ 2 ] [ 3 ] Heterostrain has a much larger impact on electronic properties than homostrain. [ 1 ] It explains some of the sample variability which had previously been puzzling. [ 3 ] [ 14 ] Research is now moving towards understanding the impact of spatial fluctuations of heterostrain. [ 15 ] | https://en.wikipedia.org/wiki/Heterostrain
" Heterosubtypic immunity (HSI) is defined as cross-protection to infection with an influenza A virus serotype other than the one used for primary infection ." [ 1 ] In layman's terms : an 'infection with "seasonal" influenza A viruses could induce immunity against unrelated sub-strains.' [ 2 ] | https://en.wikipedia.org/wiki/Heterosubtypic_immunity |
Heterothallic species have sexes that reside in different individuals. The term is applied particularly to distinguish heterothallic fungi , which require two compatible partners to produce sexual spores, from homothallic ones, which are capable of sexual reproduction from a single organism.
In heterothallic fungi, two different individuals contribute nuclei to form a zygote. Examples of heterothallism include Saccharomyces cerevisiae , Aspergillus fumigatus , Aspergillus flavus , Penicillium marneffei and Neurospora crassa . The heterothallic life cycle of N. crassa is described in some detail below, since similar life cycles are present in other heterothallic fungi.
Certain heterothallic species (such as Neurospora tetrasperma ) are called "pseudo-homothallic". Instead of separating into four individual spores by two meiosis events, only a single meiosis occurs, resulting in two spores, each with two haploid nuclei of different mating types (those of its parents). This results in a spore which can mate with itself ( intratetrad mating , automixis ). [ 1 ]
The yeast Saccharomyces cerevisiae is heterothallic. This means that each yeast cell is of a certain mating type and can only mate with a cell of the other mating type. During vegetative growth that ordinarily occurs when nutrients are abundant, S. cerevisiae reproduces by mitosis as either haploid or diploid cells. However, when starved, diploid cells undergo meiosis to form haploid spores. [ 2 ] Mating occurs when haploid cells of opposite mating type, MATa and MATα, come into contact. Ruderfer et al. [ 3 ] pointed out that such contacts are frequent between closely related yeast cells for two reasons. The first is that cells of opposite mating type are present together in the same ascus , the sac that contains the tetrad of cells directly produced by a single meiosis , and these cells can mate with each other. The second reason is that haploid cells of one mating type, upon cell division, often produce cells of the opposite mating type with which they may mate.
Katz Ezov et al. [ 4 ] presented evidence that in natural S. cerevisiae populations clonal reproduction and a type of “self-fertilization” (in the form of intratetrad mating) predominate. Ruderfer et al. [ 3 ] analyzed the ancestry of natural S. cerevisiae strains and concluded that outcrossing occurs only about once every 50,000 cell divisions. Thus, although S. cerevisiae is heterothallic, it appears that, in nature, mating is most often between closely related yeast cells. The relative rarity in nature of meiotic events that result from outcrossing suggests that the possible long-term benefits of outcrossing (e.g. generation of genetic diversity ) are unlikely to be sufficient for generally maintaining sex from one generation to the next. [ citation needed ] Rather, a short-term benefit, such as meiotic recombinational repair of DNA damages caused by stressful conditions such as starvation may be the key to the maintenance of sex in S. cerevisiae . [ 5 ] [ 6 ]
Aspergillus fumigatus is a heterothallic fungus. [ 7 ] It is one of the most common Aspergillus species to cause disease in humans with an immunodeficiency . A. fumigatus is widespread in nature, and is typically found in soil and decaying organic matter, such as compost heaps, where it plays an essential role in carbon and nitrogen recycling. Colonies of the fungus produce from conidiophores thousands of minute grey-green conidia (2–3 μm) that readily become airborne. A. fumigatus possesses a fully functional sexual reproductive cycle that leads to the production of cleistothecia and ascospores . [ 8 ]
Although A. fumigatus occurs in areas with widely different climates and environments, it displays low genetic variation and lack of population genetic differentiation on a global scale. [ 9 ] Thus the capability for heterothallic sex is maintained even though little genetic diversity is produced. As in the case of S. cereviae , above, a short-term benefit of meiosis may be the key to the adaptive maintenance of sex in this species.
A. flavus is the major producer of carcinogenic aflatoxins in crops worldwide. It is also an opportunistic human and animal pathogen , causing aspergillosis in immunocompromised individuals. In 2009, a sexual state of this heterothallic fungus was found to arise when strains of opposite mating type were cultured together under appropriate conditions. [ 10 ]
Sexuality generates diversity in the aflatoxin gene cluster in A. flavus , [ 11 ] suggesting that production of genetic variation may contribute to the maintenance of heterothallism in this species.
Henk et al. [ 12 ] showed that the genes required for meiosis are present in T. marneffei, and that mating and genetic recombination occur in this species.
Henk et al. [ 12 ] concluded that T. marneffei is sexually reproducing, but recombination in natural populations is most likely to occur across spatially and genetically limited distances resulting in a highly clonal population structure. Sex is maintained in this species even though very little genetic variability is produced. Sex may be maintained in T. marneffei by a short-term benefit of meiosis, as in S. cerevisiae and A. fumigatus , discussed above.
The sexual cycle of N. crassa is heterothallic. Sexual fruiting bodies (perithecia) can only be formed when two mycelia of different mating type come together. Like other ascomycetes , N. crassa has two mating types that, in this case, are symbolized by ‘A’ and ‘a’. There is no evident morphological difference between the ‘A’ and 'a' mating type strains. Both can form abundant protoperithecia, the female reproductive structure (see figure, top of § ). Protoperithecia are formed most readily in the laboratory when growth occurs on solid (agar) synthetic medium with a relatively low source of nitrogen. [ 13 ] Nitrogen starvation appears to be necessary for expression of genes involved in sexual development. [ 14 ] The protoperithecium consists of an ascogonium, a coiled multicellular hypha that is enclosed in a knot-like aggregation of hyphae. A branched system of slender hyphae, called the trichogyne, extends from the tip of the ascogonium projecting beyond the sheathing hyphae into the air. The sexual cycle is initiated (i.e. fertilization occurs) when a cell (usually a conidium) of opposite mating type contacts a part of the trichogyne (see figure, top of § ). Such contact can be followed by cell fusion leading to one or more nuclei from the fertilizing cell migrating down the trichogyne into the ascogonium. Since both ‘A’ and ‘a’ strains have the same sexual structures, neither strain can be regarded as exclusively male or female. However, as a recipient, the protoperithecium of both the ‘A’ and ‘a’ strains can be thought of as the female structure, and the fertilizing conidium can be thought of as the male participant.
The subsequent steps following fusion of ‘A’ and ‘a’ haploid cells have been outlined by Fincham and Day [ 15 ] and by Wagner and Mitchell. [ 16 ] After fusion of the cells, the further fusion of their nuclei is delayed. Instead, a nucleus from the fertilizing cell and a nucleus from the ascogonium become associated and begin to divide synchronously. The products of these nuclear divisions (still in pairs of unlike mating type, i.e. ‘A’ / ‘a’) migrate into numerous ascogenous hyphae, which then begin to grow out of the ascogonium. Each of these ascogenous hyphae bends to form a hook (or crozier) at its tip, and the ‘A’ and ‘a’ pair of haploid nuclei within the crozier divide synchronously. Next, septa form to divide the crozier into three cells. The central cell in the curve of the hook contains one ‘A’ and one ‘a’ nucleus. This binuclear cell initiates ascus formation and is called an "ascus-initial" cell. Next, the two uninucleate cells on either side of the first ascus-forming cell fuse with each other to form a binucleate cell that can grow to form a further crozier, which can then form its own ascus-initial cell. This process can then be repeated multiple times.
After formation of the ascus-initial cell, the ‘A’ and ‘a’ nuclei fuse with each other to form a diploid nucleus. This nucleus is the only diploid nucleus in the entire life cycle of N. crassa . The diploid nucleus has 14 chromosomes, formed from the two fused haploid nuclei that had 7 chromosomes each. Formation of the diploid nucleus is immediately followed by meiosis . The two sequential divisions of meiosis lead to four haploid nuclei, two of the ‘A’ mating type and two of the ‘a’ mating type. One further mitotic division leads to four ‘A’ and four ‘a’ nuclei in each ascus . Meiosis is an essential part of the life cycle of all sexually reproducing organisms, and in its main features, meiosis in N. crassa seems typical of meiosis generally.
As the above events are occurring, the mycelial sheath that had enveloped the ascogonium develops as the wall of the perithecium, becomes impregnated with melanin, and blackens. The mature perithecium has a flask-shaped structure.
A mature perithecium may contain as many as 300 asci, each derived from identical fusion diploid nuclei. Ordinarily, in nature, when the perithecia mature the ascospores are ejected rather violently into the air. These ascospores are heat resistant and, in the lab, require heating at 60 °C for 30 minutes to induce germination. For normal strains, the entire sexual cycle takes 10 to 15 days. In a mature ascus containing 8 ascospores, pairs of adjacent spores are identical in genetic constitution, since the last division is mitotic, and since the ascospores are contained in the ascus sac that holds them in a definite order determined by the direction of nuclear segregations during meiosis. Since the four primary products are also arranged in sequence, the pattern of genetic markers from a first-division segregation can be distinguished from the markers from a second-division segregation pattern. | https://en.wikipedia.org/wiki/Heterothallism |
Heterotopy is an evolutionary change in the spatial arrangement of an organism's embryonic development , complementary to heterochrony , a change to the rate or timing of a development process. It was first identified by Ernst Haeckel in 1866 and has remained less well studied than heterochrony.
The concept of heterotopy, bringing evolution about by a change in the spatial arrangement of some process within the embryo , was introduced by the German zoologist Ernst Haeckel in 1866. He gave as an example a change in the positioning of the germ layer which created the gonads . Since then, heterotopy has been studied less than its companion, heterochrony which results in more readily observable phenomena like neoteny . With the arrival of evolutionary developmental biology in the late 20th century, heterotopy has been identified in changes in growth rate; in the distribution of proteins in the embryo; the creation of the vertebrate jaw ; the repositioning of the mouth of nematode worms, and of the anus of irregular sea urchins . Heterotopy can create new morphologies in the embryo and hence in the adult, helping to explain how evolution shapes bodies. [ 1 ] [ 2 ] [ 3 ]
In terms of evolutionary developmental biology, heterotopy means the positioning of a developmental process at any level in an embryo, whether at the level of the gene , a circuit of genes, a body structure, or an organ . It often involves homeosis , the evolutionary change of one organ into another. Heterotopy is achieved by the rewiring of an organism's genome , and can accordingly create rapid evolutionary change. [ 2 ] [ 4 ]
The evolutionary biologist Brian K. Hall argues that heterochrony offers such a simple and readily understood mechanism for reshaping bodies that heterotopy has likely often been overlooked. Since starting or stopping a process earlier or later, or changing its rate, can clearly cause a wide variety of changes in body shape and size ( allometry ), biologists have in Hall's view often invoked heterochrony to the exclusion of heterotopy. [ 5 ]
In botany, examples of heterotopy include the transfer of bright flower pigments from ancestral petals to leaves that curl and form to mimic petals. In other cases, experiments have yielded plants with mature leaves present on the highest shoots. Normal leaf development progresses from the base of the plant to the top: as the plant grows upwards it produces new leaves and the lower leaves mature.
One textbook example of heterotopy in animals, a classic in genetics and developmental biology , is the experimental induction of legs in place of antennae in fruit flies, Drosophila . The name for this specific induction is 'antennapedia'. Surprisingly and elegantly, the transfer takes place in the experiment with no other strange pleiotropic consequences. The leg is transplanted and is still able to rotate on the turret-like complex on the fruit fly's head. The leg simply replaces the antennae. Before this experiment, it was thought that anatomical structures were somehow constrained into certain poorly understood and undefined domains. Yet the relatively simple modification took place and caused a dramatic change in phenotype .
This further demonstrated that structures that were thought to be homologous at one time and were later modified still retained some modularity , or were interchangeable, even millions of years after evolution had sent antennae down a separate path from the other appendages. This is due to the common origin of homeotic genes . Another well-known example is the environmentally induced heterotopic change seen in the melanin of the Himalayan rabbit and the Siamese cat and related breeds. In the Himalayan rabbit, pigments in fur and skin are only expressed in the most distal portions, the very ends of the limbs. This is similar to the case of Siamese cats. In both, the placement of fur pigmentation is induced by temperature. The regions furthest from core body heat and with the lowest circulation develop darker as an induced result. Individuals raised at a uniform external temperature above 30 °C do not express melanin in the extremities and as a result the fur on their paws is left white. The specific gene complex determined to be responsible is in the melanin expression series that is also responsible for albinism . This change is not heritable because it is a flexible or plastic phenotypic change. The heterotopy demonstrated is that colder body regions are marked by expression of melanin.
The Himalayan rabbit and the Siamese cat are examples of artificial selection on heterotopy, developed by breeders incidentally, long before the concept was understood. The current theory is that people selected for stereotypical phenotypic patterns (dark extremities) that happened to be repeatedly produced at a typical temperature. This is perhaps the only known example of convergent mechanisms in artificial selection. The human breeding cultures that maintained the rabbits and cats tended to favor the pattern, closely mimicking the way in which, under natural selection, the underlying genetics that form flexible adaptations can be selected for on the basis of the phenotype they typically produce in an assumed environment.
Another example may have happened in the early history of domesticating horses: tail-type hair grew instead of the wild-type short stiff hair still present in the manes of other equids such as donkeys and zebras. | https://en.wikipedia.org/wiki/Heterotopy |
A heterotroph ( / ˈ h ɛ t ər ə ˌ t r oʊ f , - ˌ t r ɒ f / ; [ 1 ] [ 2 ] from Ancient Greek ἕτερος ( héteros ) ' other ' and τροφή ( trophḗ ) ' nutrition ' ) is an organism that cannot produce its own food, instead taking nutrition from other sources of organic carbon , mainly plant or animal matter. In the food chain, heterotrophs are primary, secondary and tertiary consumers, but not producers. [ 3 ] [ 4 ] Living organisms that are heterotrophic include all animals and fungi , some bacteria and protists , [ 5 ] and many parasitic plants . The term heterotroph arose in microbiology in 1946 as part of a classification of microorganisms based on their type of nutrition . [ 6 ] The term is now used in many fields, such as ecology , in describing the food chain . Heterotrophs occupy the second and third trophic levels of the food chain while autotrophs occupy the first trophic level. [ 7 ]
Heterotrophs may be subdivided according to their energy source. If the heterotroph uses chemical energy, it is a chemoheterotroph (e.g., humans and mushrooms). If it uses light for energy, then it is a photoheterotroph (e.g., green non-sulfur bacteria ).
Heterotrophs represent one of the two mechanisms of nutrition ( trophic levels ), the other being autotrophs ( auto = self, troph = nutrition). Autotrophs use energy from sunlight ( photoautotrophs ) or oxidation of inorganic compounds ( lithoautotrophs ) to convert inorganic carbon dioxide to organic carbon compounds and energy to sustain their life. Comparing the two in basic terms, heterotrophs (such as animals) eat either autotrophs (such as plants) or other heterotrophs, or both.
Detritivores are heterotrophs which obtain nutrients by consuming detritus (decomposing plant and animal parts as well as feces ). [ 8 ] Saprotrophs (also called lysotrophs) are chemoheterotrophs that use extracellular digestion in processing decayed organic matter. The process is most often facilitated through the active transport of such materials through endocytosis within the internal mycelium and its constituent hyphae . [ 9 ]
Heterotrophs can be organotrophs or lithotrophs . Organotrophs exploit reduced carbon compounds as electron sources, like carbohydrates , fats , and proteins from plants and animals. On the other hand, lithoheterotrophs use inorganic compounds, such as ammonium , nitrite , or sulfur , to obtain electrons. Another way of classifying different heterotrophs is by assigning them as chemotrophs or phototrophs . Phototrophs utilize light to obtain energy and carry out metabolic processes, whereas chemotrophs use the energy obtained by the oxidation of chemicals from their environment. [ 10 ]
Photoorganoheterotrophs, such as Rhodospirillaceae and purple non-sulfur bacteria , synthesize organic compounds using sunlight coupled with oxidation of organic substances.
They use organic compounds to build structures. They do not fix carbon dioxide and apparently do not have the Calvin cycle . [ 11 ] Chemolithoheterotrophs like Oceanithermus profundus [ 12 ] obtain energy from the oxidation of inorganic compounds, including hydrogen sulfide , elemental sulfur , thiosulfate , and molecular hydrogen . Mixotrophs (or facultative chemolithotrophs) can use either carbon dioxide or organic carbon as the carbon source, meaning that mixotrophs have the ability to use both heterotrophic and autotrophic methods. [ 13 ] [ 14 ] Although mixotrophs have the ability to grow under both heterotrophic and autotrophic conditions, C. vulgaris has higher biomass and lipid productivity when growing under heterotrophic compared to autotrophic conditions. [ 15 ]
Heterotrophs, by consuming reduced carbon compounds, are able to use all the energy that they obtain from food for growth and reproduction, unlike autotrophs, which must use some of their energy for carbon fixation. [ 11 ] Both heterotrophs and autotrophs alike are usually dependent on the metabolic activities of other organisms for nutrients other than carbon, including nitrogen, phosphorus, and sulfur, and can die from lack of food that supplies these nutrients. [ 16 ] This applies not only to animals and fungi but also to bacteria. [ 11 ]
The chemical origin of life hypothesis suggests that life originated in a prebiotic soup with heterotrophs. [ 17 ] The summary of this theory is as follows: early Earth had a highly reducing atmosphere and energy sources such as electrical energy in the form of lightning, which resulted in reactions that formed simple organic compounds , which further reacted to form more complex compounds and eventually resulted in life. [ 18 ] [ 19 ] Alternative theories of an autotrophic origin of life contradict this theory. [ 20 ]
The theory of a chemical origin of life beginning with heterotrophic life was first proposed in 1924 by Alexander Ivanovich Oparin , and was eventually published as "The Origin of Life." [ 21 ] It was independently proposed for the first time in English in 1929 by John Burdon Sanderson Haldane . [ 22 ] While these authors agreed on the gases present and the progression of events to a point, Oparin championed a progressive complexity of organic matter prior to the formation of cells, while Haldane had more considerations about the concept of genes as units of heredity and the possibility of light playing a role in chemical synthesis ( autotrophy ). [ 23 ]
Evidence grew to support this theory in 1953, when Stanley Miller conducted an experiment in which he added gases that were thought to be present on early Earth – water (H 2 O), methane (CH 4 ), ammonia (NH 3 ), and hydrogen (H 2 ) – to a flask and stimulated them with electricity resembling the lightning present on early Earth. [ 24 ] The experiment resulted in the discovery that early Earth conditions were supportive of the production of amino acids, with recent re-analyses of the data recognizing that over 40 different amino acids were produced, including several not currently used by life. [ 17 ] This experiment heralded the beginning of the field of synthetic prebiotic chemistry, and is now known as the Miller–Urey experiment . [ 25 ]
On early Earth, oceans and shallow waters were rich with organic molecules that could have been used by primitive heterotrophs. [ 26 ] This method of obtaining energy was energetically favorable until organic carbon became more scarce than inorganic carbon, providing a potential evolutionary pressure to become autotrophic. [ 26 ] [ 27 ] Following the evolution of autotrophs, heterotrophs were able to utilize them as a food source instead of relying on the limited nutrients found in their environment. [ 28 ] Eventually, autotrophic and heterotrophic cells were engulfed by these early heterotrophs and formed a symbiotic relationship. [ 28 ] The endosymbiosis of autotrophic cells is suggested to have evolved into the chloroplasts while the endosymbiosis of smaller heterotrophs developed into the mitochondria , allowing the differentiation of tissues and development into multicellularity. This advancement allowed the further diversification of heterotrophs. [ 28 ] Today, many heterotrophs and autotrophs also utilize mutualistic relationships that provide needed resources to both organisms. [ 29 ] One example of this is the mutualism between corals and algae, where the former provides protection and necessary compounds for photosynthesis while the latter provides oxygen. [ 30 ]
However, this hypothesis is controversial, as CO 2 was the main carbon source on the early Earth, suggesting that early cellular life consisted of autotrophs that relied upon inorganic substrates as an energy source and lived at alkaline hydrothermal vents or acidic geothermal ponds. [ 31 ] Simple biomolecules transported from space are considered to have been either too reduced to have been fermented or too heterogeneous to support microbial growth. [ 32 ] Heterotrophic microbes likely originated at low H 2 partial pressures. Bases, amino acids, and ribose are considered to be the first fermentation substrates. [ 33 ]
Heterotrophs are currently found in each domain of life: Bacteria , Archaea , and Eukarya . [ 34 ] Domain Bacteria includes a variety of metabolic activity including photoheterotrophs, chemoheterotrophs, organotrophs, and heterolithotrophs. [ 34 ] Within Domain Eukarya, kingdoms Fungi and Animalia are entirely heterotrophic, though most fungi absorb nutrients through their environment. [ 35 ] [ 36 ] Most organisms within Kingdom Protista are heterotrophic while Kingdom Plantae is almost entirely autotrophic, except for myco-heterotrophic plants. [ 35 ] Lastly, Domain Archaea varies immensely in metabolic functions and contains many methods of heterotrophy. [ 34 ]
Many heterotrophs are chemoorganoheterotrophs that use organic carbon (e.g. glucose) as their carbon source, and organic chemicals (e.g. carbohydrates, lipids, proteins) as their electron sources. [ 37 ] Heterotrophs function as consumers in the food chain : they obtain nutrients through saprotrophic , parasitic , or holozoic nutrition. [ 38 ] They break down complex organic compounds (e.g., carbohydrates, fats, and proteins) produced by autotrophs into simpler compounds (e.g., carbohydrates into glucose , fats into fatty acids and glycerol , and proteins into amino acids ). They release the chemical energy of nutrient molecules by oxidizing carbon and hydrogen atoms from carbohydrates, lipids, and proteins to carbon dioxide and water, respectively.
They can catabolize organic compounds by respiration, fermentation, or both. Fermenting heterotrophs are either facultative or obligate anaerobes that carry out fermentation in low oxygen environments, in which the production of ATP is commonly coupled with substrate-level phosphorylation and the production of end products (e.g. alcohol, CO 2 , sulfide). [ 39 ] These products can then serve as the substrates for other bacteria in anaerobic digestion , and be converted into CO 2 and CH 4 , which is an important step of the carbon cycle for removing organic fermentation products from anaerobic environments. [ 39 ] Heterotrophs can undergo respiration , in which ATP production is coupled with oxidative phosphorylation . [ 39 ] [ 40 ] This leads to the release of oxidized carbon wastes such as CO 2 and reduced wastes like H 2 O, H 2 S, or N 2 O into the atmosphere. Heterotrophic microbes' respiration and fermentation account for a large portion of the release of CO 2 into the atmosphere, making it available for autotrophs as a source of nutrients and for plants as a cellulose synthesis substrate. [ 41 ] [ 40 ]
Respiration in heterotrophs is often accompanied by mineralization , the process of converting organic compounds to inorganic forms. [ 41 ] When the organic nutrient source taken in by the heterotroph contains essential elements such as N, S, and P in addition to C, H, and O, these are often removed first to proceed with the oxidation of the organic nutrient and the production of ATP via respiration. [ 41 ] S and N in the organic carbon source are transformed into H 2 S and NH 4 + through desulfurylation and deamination , respectively. [ 41 ] [ 40 ] Heterotrophs also allow for dephosphorylation as part of decomposition . [ 40 ] The conversion of N and S from organic form to inorganic form is a critical part of the nitrogen and sulfur cycle . H 2 S formed from desulfurylation is further oxidized by lithotrophs and phototrophs while NH 4 + formed from deamination is further oxidized by lithotrophs to the forms available to plants. [ 41 ] [ 40 ] Heterotrophs' ability to mineralize essential elements is critical to plant survival. [ 40 ]
Most opisthokonts and prokaryotes are heterotrophic; in particular, all animals and fungi are heterotrophs. [ 5 ] Some animals, such as corals , form symbiotic relationships with autotrophs and obtain organic carbon in this way. Furthermore, some parasitic plants have also turned fully or partially heterotrophic, while carnivorous plants consume animals to augment their nitrogen supply while remaining autotrophic.
Animals are classified as heterotrophs by ingestion; fungi are classified as heterotrophs by absorption.
Heterotrophs, organisms that obtain energy and carbon by consuming organic matter, are vital parts of Earth's biogeochemical cycles particularly in the carbon, nitrogen, and sulfur cycles. Their metabolic activities impact the processing and cycling of elements through ecosystems and the biosphere.
Heterotrophs are key players in the carbon cycle, acting as both consumers and decomposers. They release carbon dioxide (CO2) into the atmosphere through respiration, contributing to a large portion of carbon dioxide emissions. [ 42 ] This process makes carbon available for autotrophs, who can fix carbon through photosynthesis or chemosynthesis. This circulation supports the continuous cycling of carbon between organic and inorganic forms. [ 43 ]
Heterotrophic organisms contribute to key processes in the nitrogen cycle like ammonification, the conversion of organic nitrogen to ammonia, and denitrification, the reduction of nitrate and the release of nitrogen gas to the atmosphere. [ 44 ] These processes can be known as secondary metabolism in heterotrophs. [ 45 ] Heterotrophic microorganisms are essential in the mineralization of organic compounds containing nitrogen. [ 46 ] [ 47 ] Through deamination, they convert organic nitrogen to ammonium (NH4+), which can be further oxidized by lithotrophs into forms available to plants. Similarly, desulfurylation by heterotrophs transforms organic sulfur into hydrogen sulfide (H2S), which is then oxidized by lithotrophs and phototrophs, contributing to the sulfur cycle.
The ability of heterotrophs to break down complex organic compounds is fundamental to nutrient cycling in ecosystems. [ 48 ] By decomposing dead organic matter, they release essential elements like phosphorus through dephosphorylation, making these nutrients available for other organisms. [ 49 ] This process is critical for maintaining soil fertility and supporting plant growth. Heterotrophs connect the flow of energy and organic matter across ecosystems. Their biological processes link with atmospheric, chemical and geological systems. [ 50 ]
Heterotrophs form intricate relationships with autotrophs in ecosystems. While they depend on autotrophs for energy-rich organic compounds, heterotrophs support autotrophic growth by releasing minerals and carbon dioxide (CO2). This interdependence is exemplified in symbiotic relationships, such as those between corals and algae, where nutrient exchange benefits both partners. Their metabolic processes depend on each other and traces of organic compounds. [ 51 ]
The biogeochemical activities of heterotrophs are thus integral to ecosystem functioning, influencing the availability of nutrients, the composition of the atmosphere, and the productivity of both terrestrial and aquatic environments. | https://en.wikipedia.org/wiki/Heterotroph |
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. They cannot make their own food as green plants do; heterotrophic organisms have to take in all the organic substances they need to survive.
All animals , certain types of fungi , and non-photosynthesizing plants are heterotrophic . In contrast, green plants , red algae , brown algae , and cyanobacteria are all autotrophs , which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition .
All heterotrophs (except blood and gut parasites ) have to convert solid food into soluble compounds which are capable of being absorbed (digestion). The soluble products of digestion are then broken down by the organism to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms have only four types of nutrition.
The Hetherington Prize has been awarded once a year since 1991 at Oxford University for the best doctoral thesis presentation in the Department of Materials. The first ever prize (1991) was awarded to Prof. Kwang-Leong Choy ( D.Phil. , DSc , FIMMM , FRSC , CSci ), who went on to become the Director of the Institute for Materials Discovery at University College London and Fellow of the Royal Society of Canada.
The prize is usually awarded to a single doctoral candidate per year, but in two years it was shared (in 2011 to Nike Dattani and Lewys Jones, and in 2015 to Nina Klein, Aaron Lau, and Joe O'Gorman). | https://en.wikipedia.org/wiki/Hetherington_Prize |
The heuristic-systematic model of information processing ( HSM ) is a widely recognized [ citation needed ] model by Shelly Chaiken that attempts to explain how people receive and process persuasive messages. [ 1 ]
The model states that individuals can process messages in one of two ways: heuristically or systematically. Systematic processing entails careful and deliberative processing of a message, while heuristic processing entails the use of simplifying decision rules or 'heuristics' to quickly assess the message content. The guiding belief with this model is that individuals are more apt to minimize their use of cognitive resources (i.e., to rely on heuristics), thus affecting the intake and processing of messages. [ 2 ]
HSM predicts that processing type will influence the extent to which a person is persuaded or exhibits lasting attitude change. HSM is quite similar to the elaboration likelihood model , or ELM. Both models were predominantly developed in the early- to mid-1980s and share many of the same concepts and ideas. [ 3 ]
Early research investigating how people process persuasive messaging focused mainly on cognitive theories and the way the mind processed each element of a message. One of the early guiding principles concerning the motivations underlying persuasive communication came from Leon Festinger ’s (1950) statement that incorrect or improper attitudes are generally maladaptive and can have deleterious behavioral and affective consequences.
In 1953, Hovland , Janis , and Kelley noted that a sense of "rightness" accompanies holding opinions similar to the opinions of others. In 1987, Holtz and Miller reaffirmed this line of thought by noting, "When other people are perceived to hold similar attitudes, one's confidence in the validity of one's own attitude is increased." [ 4 ]
Another concept that contributed to the HSM was the sufficiency principle . This principle reflected widespread notions that people use limited cognitive resources, or use an "economy-minded" approach to information processing when presented with persuasive information. Based on this thought, early assumptions said people were at least partially guided by the " principle of least effort ". This principle stated that in the interest of economy, the mind would often process with the least amount of effort (i.e., use a heuristic), and for more detailed information processing would use more effortful (systematic) processing. This was the major difference when compared with the ELM, which described the two different ways information was processed, through central and/or peripheral processing. [ 5 ]
The developer and main researcher of the HSM was Shelly Chaiken. Under her direction, the HSM has undergone several major revisions. As she noted in 1980 and 1987, the model specified the two modes of heuristic and systematic processing. Then, Chaiken et al. noted in 1989 that the model was extended to specify the psychological conditions for triggering the modes of processing in terms of the discrepancy between actual and desired subjective confidence. In 1986, Chaiken and others updated the model to include underlying motivations . [ 6 ]
Heuristic processing uses judgmental rules known as knowledge structures that are learned and stored in memory . [ 7 ] The heuristic approach offers an economic advantage by requiring minimal cognitive effort on the part of the recipient. [ 1 ] Heuristic processing is related to the concept of " satisficing ." [ 8 ]
Heuristic processing is governed by availability, accessibility, and applicability. Availability refers to the knowledge structure, or heuristic, being stored in memory for future use. Accessibility of the heuristic applies to the ability to retrieve the memory for use. Applicability of the heuristic refers to the relevancy of the memory to the judgmental task. [ 7 ] Due to the use of knowledge structures, a person using heuristic information processing is likely to agree with messages delivered by experts, or messages that are endorsed by others, without fully processing the semantic content of the message. [ 9 ] In comparison to systematic processing, heuristic processing entails judging the validity of messages by relying more on accessible context information, such as the identity of the source or other non-content cues. Thus, heuristic views de-emphasize detailed information evaluation and focus on the role of simple rules or cognitive heuristics in mediating persuasion. [ 1 ] [ 10 ]
Individuals may be more likely to use heuristic processing when an issue is less personally important to them (they have low “issue involvement”) or when they believe their judgment will not have significant impacts on themselves (low “response involvement”). [ 1 ]
Systematic processing involves comprehensive and analytic, cognitive processing of judgment-relevant information. [ 7 ] The systematic approach values source reliability and message content, which may exert a stronger impact on persuasion, when determining message validity. [ 1 ] Judgments developed from systematic processing rely heavily on in-depth treatment of judgment-relevant information and respond accordingly to the semantic content of the message. [ 7 ] Recipients developing attitudes from a systematic basis exert considerable cognitive effort and actively attempt to comprehend and evaluate the message's arguments. When processing systematically, recipients also attempt to assess the arguments' validity as it relates to the message's conclusion. Systematic views of persuasion emphasize detailed processing of message content and the role of message-based cognitions in mediating opinion change. While recipients utilizing systematic processing rely heavily on message content, source characteristics and other non-content cues may supplement the recipients’ assessment of validity in the persuasive message. [ 1 ]
Both heuristic and systematic processes may occur independently. It is also possible for both to occur simultaneously in an additive fashion or in a way that the judgmental implications of one process bias the other. [ 7 ] The heuristic-systematic model includes the hypothesis that attitudes developed or changed by utilizing heuristic processing alone will likely be less stable, less resistant to counterarguments , and less predictive of subsequent behavior than attitudes developed or changed utilizing systematic processing. [ 1 ]
Message recipients using heuristic processing may sometimes choose to accept message conclusions they would otherwise have rejected, or vice versa, had they invested more time and effort to scrutinize the message. [ 1 ]
Source credibility affects persuasion under conditions of low, but not high, issue-involvement and response-involvement. [ 1 ]
When economic concerns are predominant, the recipient will likely use heuristic processing to form a judgement about the persuasive argument. Conversely, when reliability concerns are predominant (i.e., recipients perceive significant importance in accurately judging an argument), they will likely use a systematic processing strategy. Reliability concerns are influenced by the level of the recipient's issue-involvement or response-involvement. When the recipient views their judgment as being less consequential, they will likely place greater value on economic concerns than reliability concerns.
Research into information processing, especially in persuasive messaging, can be applied in advertising . For instance, HSM has been used to study how people evaluate Internet web pages.
In a 2002 study by Wathen & Burkell, [ 11 ] they proposed a theory that separated the evaluation process into distinct segments. In the theory, the process began with low-effort examinations of peripheral cues (e.g., appearance, design , organization, and source reputation) then continued to a more high-effort analysis of the content of the information source. The proposed research also drew on social psychological theories of dual-processing , which stated that information processing outcomes were the result of interaction between a fast, associative information-processing mode based on low-effort heuristics, and a slow, rule-based information processing mode based on high-effort systematic reasoning. Wathen and Burkell proposed (but did not test) that if an individual determines that an online source does not meet an appropriate level of credibility at any one stage, then he or she will leave the site without further evaluation. They theorized that this “easy to discard” behavior was indicative of information-rich environments, where the assumption is that many other potential sources of information exist, and spending too much time on any one source is potentially wasteful. [ 11 ]
The HSM has also been applied in medical decision-making contexts. A 2004 study by Suzanne K. Steginga , PhD, and Stefano Occhipinti, PhD, of the Queensland Cancer Fund and the School of Applied Psychology at Griffith University investigated the utility of the heuristic-systematic processing model as a framework for the investigation of patient decision making. A total of 111 men diagnosed with localized prostate cancer were assessed using verbal protocol analysis and self-report measures. The results showed: "Most men (68%) preferred that decision making be shared equally between them and their doctor. Men's use of the expert opinion heuristic was related to men's verbal reports of decisional uncertainty and having a positive orientation to their doctor and medical care; a desire for greater involvement in decision making was predicted by a high internal locus of health control. Trends were observed for systematic information processing to increase when the heuristic strategy used was negatively affect -laden and when men were uncertain about the probabilities for cure and side effects. There was a trend for decreased systematic processing when the expert opinion heuristic was used. Findings were consistent with the heuristic-systematic processing model and suggest that this model has utility for future research in applied decision making about health issues." [ 12 ]
Originally the heuristic-systematic model was developed to apply to "validity seeking" persuasion contexts in which peoples' primary motivation is to attain accurate attitudes that align with relevant facts . [ 1 ] [ 9 ] Chaiken assumes that the primary processing goal of accuracy-motivated recipients is to assess the validity of persuasive messages, and that both heuristic and systematic processing can serve this objective. [ 9 ] Other motives beyond the validity-seeking persuasion context were identified by Chaiken and colleagues (1989) who proposed an expanded model that posits two additional motives that heuristic and systematic processing can serve: defense-motivation and impression-motivation.
Contrary to previous viewpoints, the heuristic-systematic model and the elaboration likelihood model should be treated as complementary models to create a dual-processing framework for use in future research for understanding a variety of social influence phenomena. [ 9 ]
A major criticism of HSM is that the model closely relates to ELM , which is also a dual-processing model discussing two main paths to persuasion. The ELM discusses the two routes as "central" route processing and "peripheral" route processing. ELM's central processing has been likened to systematic processing in HSM, while peripheral processing is similar to HSM's heuristic processing. These two routes of processing define related theories behind attitude change.
In ELM, the central route is reflective and requires a willingness to process and think about the message. The peripheral route occurs when attitudes are formed without extensive thought, but more from mental shortcuts, credibility, and appearance cues. The route of persuasion processing depends on the level of involvement in the topic or issue. High involvement or elaboration increases central route processing, especially when the motivation and ability to process the message exist. Conversely, low involvement increases peripheral route processing when these motivation and ability conditions do not exist. If the topic or idea is irrelevant to the individual, then the message takes the peripheral route. [ 13 ]
HSM specifically examines validity-seeking persuasion settings concerning people's motivations within the social environment. [ 9 ] A limitation of HSM lies in its inability to define the specific motivations of persuasion, which is why Chaiken expanded HSM to illustrate that heuristic and systematic processing can "serve defense-motivation, the desire to form or defend particular attitudinal positions, and impression-motivation, the desire to form or hold socially acceptable attitudinal positions" (p. 326). [ 9 ]
Major assumptions exist in both HSM and ELM, which is why both models have generated debate and are often misconstrued. Systematic processing assumes that persuasion has occurred via the recipient's understanding and cognitive elaboration of the persuasive argument. [ 9 ] In addition, researchers hypothesize that systematic processing requires and uses cognitive capacity, while heuristic processing makes low cognitive demands. [ 9 ] Furthermore, both HSM and ELM assume that "capacity and motivation are important determinants of systematic process", which results in biased modes of processing (p. 327). [ 9 ] With heuristic processing, there is less need to process information cognitively than with systematic processing. Heuristic processing occurs when people form immediate decisions and conclusions based on the information readily available, rather than analytically processing the information, which requires more cognition. Heuristic processing, as defined by HSM, illustrates that people can formulate decisions utilizing basic rules such as "experts' statements can be trusted" and "consensus implies correctness" to establish validity within messages (p. 327). [ 9 ] Therefore, individuals who process messages through heuristic routes of persuasion likely formulate decisions based on experts’ opinions and what the consensus believes, as opposed to fully processing the message in its entirety.
This leads to another similarity between HSM and ELM, as attitudes and opinions developed through heuristic processing will tend to be "less stable, less resistant to counter-propaganda, and less predictive of behavior" in comparison to attitudes and opinions formed through detailed information within systematic processing (p. 327). [ 9 ]
HSM postulates that heuristic and systematic processing can exert both "independent" and "interdependent" effects on decision making by occurring simultaneously (p. 328). [ 9 ] Unlike HSM, ELM does not postulate whether central route processing and peripheral route processing can co-occur or not. Another assumption by Chaiken and her colleagues is that systematic processing does in fact provide people with more judgment-relevant information in comparison to heuristic processing of information, which does not account for any weaknesses in expert subject matter material. [ 9 ] Therefore, while systematic processing may be prevalent within many social environments, HSM, unlike its model counterpart ELM, does illustrate "the possibility that heuristic processing can exert a significant and independent influence on persuasion" (p. 329). [ 9 ] | https://en.wikipedia.org/wiki/Heuristic-systematic_model_of_information_processing |
HeuristicLab [ 1 ] [ 2 ] is a software environment for heuristic and evolutionary algorithms , developed by members of the Heuristic and Evolutionary Algorithm Laboratory (HEAL) at the University of Applied Sciences Upper Austria , in Hagenberg im Mühlkreis .
HeuristicLab has a strong focus on providing a graphical user interface so that users are not required to have comprehensive programming skills to adjust and extend the algorithms for a particular problem. In HeuristicLab algorithms are represented as operator graphs and changing or rearranging operators can be done by drag-and-drop without actually writing code. The software thereby tries to shift algorithm development capability from the software engineer to the user and practitioner. Developers can still extend the functionality on code level and can use HeuristicLab's plug-in mechanism that allows them to integrate custom algorithms, solution representations or optimization problems.
Development on HeuristicLab was started in 2002 by Stefan Wagner and Michael Affenzeller. The main motivation for the development of HeuristicLab was to build a paradigm-independent, flexible, extensible, and comfortable environment for heuristic optimization on top of a state-of-the-art programming environment and by using modern programming concepts. As the Microsoft .NET framework seemed to fulfill these requirements, it was chosen as the development environment and C# as the programming language.
The first officially available version of HeuristicLab was 1.0, released in 2004, with an improved version 1.1 released in 2005. Development on the next version of HeuristicLab started in the same year. Among other things, it was decided that HeuristicLab 2.0 should provide an entirely new user experience and lift the burden of programming off of the user. Therefore, HeuristicLab 2.0 was the first version featuring graphical tools for creating algorithms; however, due to the complexity of the user interface, HeuristicLab 2.0 was never released to the public. In the summer of 2007 it was decided that a new iteration of HeuristicLab was needed which should combine the usability of version 1.1 with the algorithm modeling concepts of version 2.0. HeuristicLab 3.0 was released internally in the beginning of 2008. Over the next two years HeuristicLab was gradually improved, which led to the release of version 3.3 in summer 2010 as open source software .
The following list gives an overview of the algorithms supported by HeuristicLab:
The following list gives an overview of the problems supported by HeuristicLab: | https://en.wikipedia.org/wiki/HeuristicLab |
In mathematical optimization and computer science , heuristic (from Greek εὑρίσκω "I find, discover" [ 1 ] ) is a technique designed for problem solving more quickly when classic methods are too slow for finding an exact or approximate solution, or when classic methods fail to find any exact solution in a search space . This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut.
A heuristic function , also simply called a heuristic , is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution. [ 2 ]
The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time.
Heuristics may produce results by themselves, or they may be used in conjunction with optimization algorithms to improve their efficiency (e.g., they may be used to generate good seed values).
Results about NP-hardness in theoretical computer science make heuristics the only viable option for a variety of complex optimization problems that need to be routinely solved in real-world applications.
Heuristics underlie the whole field of Artificial Intelligence and the computer simulation of thinking, as they may be used in situations where there are no known algorithms . [ 3 ]
One way of achieving the computational performance gain expected of a heuristic consists of solving a simpler problem whose solution is also a solution to the initial problem.
An example of approximation is described by Jon Bentley for solving the travelling salesman problem (TSP):
so as to select the order to draw using a pen plotter . TSP is known to be NP-hard, so an optimal solution for even a moderately sized problem is difficult to find. Instead, the greedy algorithm can be used to give a good but not optimal solution (it is an approximation to the optimal answer) in a reasonably short amount of time. The greedy algorithm heuristic says to pick whatever is currently the best next step regardless of whether that prevents (or even makes impossible) good steps later. It is a heuristic in the sense that practice indicates it is a good enough solution, while theory indicates that there are better solutions (and even indicates how much better, in some cases). [ 4 ]
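As a rough illustration (not Bentley's original code), the nearest-neighbour form of the greedy approach can be sketched in a few lines of Python; the point coordinates and the starting index below are made-up values for the example.

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited point.

    Returns an order of indices that is usually good but generally not optimal.
    """
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour = [start]
    current = start
    while unvisited:
        # Pick whatever is currently the best next step, regardless of
        # whether that prevents better steps later.
        nxt = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

# Hypothetical pen-plotter positions
print(nearest_neighbour_tour([(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]))
```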
Another example of a heuristic making an algorithm faster occurs in certain search problems. Initially, the heuristic tries every possibility at each step, like the full-space search algorithm. But it can stop the search at any time if the current possibility is already worse than the best solution already found. In such search problems, a heuristic can be used to try good choices first so that bad paths can be eliminated early (see alpha–beta pruning ). In the case of best-first search algorithms, such as A* search , the heuristic improves the algorithm's convergence while maintaining its correctness as long as the heuristic is admissible .
In their Turing Award acceptance speech, Allen Newell and Herbert A. Simon discuss the heuristic search hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure. Each following step depends upon the step before it, thus the heuristic search learns what avenues to pursue and which ones to disregard by measuring how close the current step is to the solution. Therefore, some possibilities will never be generated as they are measured to be less likely to complete the solution.
A heuristic method can accomplish its task by using search trees. However, instead of generating all possible solution branches, a heuristic selects branches more likely to produce outcomes than other branches. It is selective at each decision point, picking branches that are more likely to produce solutions. [ 5 ]
Antivirus software often uses heuristic rules for detecting viruses and other forms of malware . Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected. The most advanced part of behavior-based heuristic scanning is that it can work against highly randomized self-modifying/mutating ( polymorphic ) viruses that cannot be easily detected by simpler string scanning methods. Heuristic scanning has the potential to detect future viruses without requiring the virus to be first detected somewhere else, submitted to the virus scanner developer, analyzed, and a detection update for the scanner provided to the scanner's users.
Some heuristics have a strong underlying theory; they are either derived in a top-down manner from the theory or are arrived at based on either experimental or real world data. Others are just rules of thumb based on real-world observation or experience without even a glimpse of theory. The latter are exposed to a larger number of pitfalls.
When a heuristic is reused in various contexts because it has been seen to "work" in one context, without having been mathematically proven to meet a given set of requirements, it is possible that the current data set does not necessarily represent future data sets (see: overfitting ) and that purported "solutions" turn out to be akin to noise.
Statistical analysis can be conducted when employing heuristics to estimate the probability of incorrect outcomes. To use a heuristic for solving a search problem or a knapsack problem , it is necessary to check that the heuristic is admissible . Given a heuristic function h(v_i, v_g) meant to approximate the true optimal distance d*(v_i, v_g) to the goal node v_g in a directed graph G containing n total nodes or vertices labeled v_0, v_1, ..., v_n, "admissible" means roughly that the heuristic underestimates the cost to the goal, or formally that h(v_i, v_g) ≤ d*(v_i, v_g) for all (v_i, v_g) where i, g ∈ [0, 1, ..., n].
If a heuristic is not admissible, it may never find the goal, either by ending up in a dead end of graph G or by skipping back and forth between two nodes v_i and v_j where i, j ≠ g.
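To make the admissibility condition concrete, the following sketch (not from the cited sources) compares a candidate heuristic against the true cost-to-goal values on a small, hypothetical weighted graph; the graph and the two heuristics are invented for illustration.

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Dijkstra over reversed edges: returns d*(v, goal) for every node v."""
    reverse = {v: [] for v in graph}
    for u, edges in graph.items():
        for v, w in edges:
            reverse[v].append((u, w))
    dist = {goal: 0}
    queue = [(0, goal)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in reverse[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

def is_admissible(h, graph, goal):
    """h is admissible iff h(v) <= d*(v, goal) for every node v."""
    dstar = true_costs_to_goal(graph, goal)
    return all(h(v) <= dstar.get(v, float("inf")) for v in graph)

# Hypothetical graph: node -> list of (neighbour, edge cost); goal is "c"
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(is_admissible(lambda v: {"a": 2, "b": 1, "c": 0}[v], graph, "c"))  # True
print(is_admissible(lambda v: {"a": 5, "b": 1, "c": 0}[v], graph, "c"))  # False: overestimates at "a"
```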
The word "heuristic" came into usage in the early 19th century. It is formed irregularly from the Greek word heuriskein , meaning "to find". [ 6 ] | https://en.wikipedia.org/wiki/Heuristic_(computer_science) |
In engineering , heuristics are experience-based methods used to reduce the need for calculations pertaining to equipment size, performance, or operating conditions. Heuristics are fallible and do not guarantee a correct solution. It is important to understand their limitations when applying them to different equipment and processes. Though heuristics are limited, they may be of value. This is because they offer time-saving approximations in preliminary process design.
Problem solving methods are intrinsic to forensic engineering methods, where failures are analysed for the root cause or causes. Only when failures have been investigated with conclusive results can remedial action be taken with confidence.
These heuristics were taken from Turton's "Analysis, Synthesis, and Design of Chemical Processes". [ 1 ]
These heuristics were taken from Turton's "Analysis, Synthesis, and Design of Chemical Processes". [ 2 ] | https://en.wikipedia.org/wiki/Heuristic_(engineering) |
Heuristics (from Ancient Greek εὑρίσκω , heurískō , "I find, discover") is the process by which humans use mental shortcuts to arrive at decisions. Heuristics are simple strategies that humans, animals, [ 1 ] [ 2 ] [ 3 ] organizations, [ 4 ] and even machines [ 5 ] use to quickly form judgments , make decisions , and find solutions to complex problems. Often this involves focusing on the most relevant aspects of a problem or situation to formulate a solution. [ 6 ] [ 7 ] [ 8 ] [ 2 ] While heuristic processes are used to find the answers and solutions that are most likely to work or be correct, they are not always right or the most accurate. [ 9 ] Judgments and decisions based on heuristics are simply good enough to satisfy a pressing need in situations of uncertainty, where information is incomplete. [ 10 ] In that sense they can differ from answers given by logic and probability .
The economist and cognitive psychologist Herbert A. Simon introduced the concept of heuristics in the 1950s, suggesting there were limitations to rational decision making. In the 1970s, psychologists Amos Tversky and Daniel Kahneman added to the field with their research on cognitive bias . It was their work that introduced specific heuristic models, a field which has only expanded since. While some argue that pure laziness is behind the heuristics process, this could just be a simplified explanation for why people don't act the way we expected them to. [ 11 ] Other theories argue that it can be more accurate than decisions based on every known factor and consequence, such as the less-is-more effect . [ 12 ]
Herbert A. Simon formulated one of the first models of heuristics, known as satisficing . His more general research program posed the question of how humans make decisions when the conditions for rational choice theory are not met, that is how people decide under uncertainty. [ 13 ] Simon is also known as the father of bounded rationality , which he understood as the study of the match (or mismatch) between heuristics and decision environments. This program was later extended into the study of ecological rationality .
In the early 1970s, psychologists Amos Tversky and Daniel Kahneman took a different approach, linking heuristics to cognitive biases. Their typical experimental setup consisted of a rule of logic or probability, embedded in a verbal description of a judgement problem, and demonstrated that people's intuitive judgement deviated from the rule. The "Linda problem" below gives an example. The deviation is then explained by a heuristic. This research, called the heuristics-and-biases program, challenged the idea that human beings are rational actors and first gained worldwide attention in 1974 with the Science paper "Judgment Under Uncertainty: Heuristics and Biases". [ 14 ] Although the originally proposed heuristics have been refined over time, this research program has changed the field by permanently setting the research questions. [ 15 ]
The original ideas by Herbert Simon were taken up in the 1990s by Gerd Gigerenzer and others. According to their perspective, the study of heuristics requires formal models that allow predictions of behavior to be made ex ante . Their program has three aspects: [ 16 ]
Among others, this program has shown that heuristics can lead to fast, frugal, and accurate decisions in many real-world situations that are characterized by uncertainty. [ 17 ] [ 18 ]
These two different research programs have led to two kinds of models of heuristics, formal models and informal ones. Formal models describe the decision process in terms of an algorithm, which allows for mathematical proofs and computer simulations. In contrast, informal models are verbal descriptions.
List of formal models of heuristics :
Herbert Simon's satisficing heuristic can be used to choose one alternative from a set of alternatives in situations of uncertainty. [ 19 ] Here, uncertainty means that the total set of alternatives and their consequences is not known or knowable. For instance, professional real-estate entrepreneurs rely on satisficing to decide in which location to invest to develop new commercial areas: "If I believe I can get at least x return within y years, then I take the option." [ 20 ] In general, satisficing is defined as:
Search through the alternatives and choose the first one that meets or exceeds the aspiration level; if no alternative is found, then the aspiration level can be adapted.
Satisficing has been reported across many domains, for instance as a heuristic car dealers use to price used BMWs. [ 21 ]
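A minimal sketch of the satisficing rule in Python; the candidate sites, their expected returns, and the aspiration level are invented for illustration, and the adaptation step simply lowers the aspiration level when nothing qualifies.

```python
def satisfice(options, value, aspiration, step=0.5):
    """Return the first option whose value meets the aspiration level.

    If no option qualifies, the aspiration level is adapted (lowered) and
    the search is repeated.
    """
    while True:
        for option in options:
            if value(option) >= aspiration:
                return option, aspiration
        aspiration -= step  # adapt the aspiration level

# Hypothetical expected returns (% per year) for candidate development sites
returns = {"site A": 4.0, "site B": 6.5, "site C": 5.2}
choice, level = satisfice(list(returns), returns.get, aspiration=7.0)
print(choice, level)  # "site B" once the aspiration has dropped to 6.5
```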
Unlike satisficing, Amos Tversky 's elimination-by-aspect heuristic can be used when all alternatives are simultaneously available. The decision-maker gradually reduces the number of alternatives by eliminating alternatives that do not meet the aspiration level of a specific attribute (or aspect). [ 22 ] During a series of selections, people tend to experience uncertainty and exhibit inconsistency. Elimination by aspects can be used when facing such selections. In general, the process of elimination by aspects is as follows: an aspect is selected, all alternatives lacking that aspect are eliminated, and the process repeats with further aspects until a choice emerges.
Elimination by aspects does not speculate that choosing among alternatives could help consumers to maximize utility; on the contrary, it holds that selection is the result of a probabilistic process that gradually eliminates alternatives. [ 22 ] A simple example is given by Amos Tversky : when someone wants to purchase a new car, the first aspect they take into account might be an automatic transmission, which eliminates all alternatives that do not contain this aspect. Then, when all the alternatives that do not have this feature are eliminated, another aspect is applied, such as a $3000 price limit. The process of elimination continues until only one alternative remains. [ 22 ]
Elimination by aspects is well used in the early stage of business angels' decision-making process since it facilitates fast decision-making: alternatives are eliminated when investors find a critical defect in the potential opportunities. [ 23 ] Another study has also demonstrated that elimination by aspects is widely used in electricity contract choice. [ 24 ] The logic behind these two examples is that elimination by aspects helps to make decisions when facing a series of complicated choices. One may need to make a decision among all alternatives while having only limited intuitive computational facilities and time. However, elimination by aspects, as a noncompensatory model, can help to make such complex decisions since it is easier to apply and involves nonnumerical computations. [ 22 ]
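A minimal sketch of elimination by aspects using the car-purchase example above; the car data are invented, and the aspects are applied in a fixed order here, whereas Tversky's full model selects aspects probabilistically.

```python
def eliminate_by_aspects(alternatives, aspects):
    """Drop alternatives that fail each aspect in turn until one remains."""
    remaining = dict(alternatives)
    for _, passes in aspects:
        filtered = {name: attrs for name, attrs in remaining.items() if passes(attrs)}
        if filtered:              # never eliminate every remaining alternative
            remaining = filtered
        if len(remaining) == 1:
            break
    return list(remaining)

# Hypothetical cars on offer
cars = {
    "car 1": {"automatic": True,  "price": 2800},
    "car 2": {"automatic": False, "price": 2500},
    "car 3": {"automatic": True,  "price": 3400},
}
aspects = [
    ("automatic transmission", lambda c: c["automatic"]),
    ("price under $3000",      lambda c: c["price"] < 3000),
]
print(eliminate_by_aspects(cars, aspects))  # ['car 1']
```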
The recognition heuristic exploits the basic psychological capacity for recognition in order to make inferences about unknown quantities in the world. For two alternatives, the heuristic is: [ 12 ]
If one of two alternatives is recognized and the other not, then infer that the recognized alternative has the higher value with respect to the criterion.
For example, in the 2003 Wimbledon tennis tournament, Andy Roddick played Tommy Robredo. If one has heard of Roddick but not of Robredo, the recognition heuristic leads to the prediction that Roddick will win. The recognition heuristic exploits partial ignorance; if one has heard of both players or of neither, a different strategy is needed. Studies of Wimbledon 2003 and 2005 have shown that the recognition heuristic applied by semi-ignorant amateur players predicted the outcomes of all gentlemen's singles games as well as or better than the seedings of the Wimbledon experts (who had heard of all players), as well as the ATP rankings. [ 25 ] [ 26 ] The recognition heuristic is ecologically rational (that is, it predicts well) when the recognition validity is substantially above chance. In the present case, recognition of players' names is highly correlated with their chances of winning. [ 27 ]
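A minimal sketch of the pairwise recognition heuristic; the set of recognized names stands in for a semi-ignorant amateur's knowledge and is, of course, just an assumed example.

```python
def recognition_heuristic(a, b, recognized):
    """Infer which of two alternatives has the higher criterion value.

    Applicable only when exactly one of the two alternatives is recognized;
    otherwise a different strategy is needed.
    """
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # both or neither recognized

known_players = {"Andy Roddick"}
print(recognition_heuristic("Andy Roddick", "Tommy Robredo", known_players))
# -> "Andy Roddick" is predicted to win
```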
The take-the-best heuristic exploits the basic psychological capacity for retrieving cues from memory in the order of their validity. Based on the cue values, it infers which of two alternatives has a higher value on a criterion. [ 28 ] Unlike the recognition heuristic, it requires that all alternatives are recognized, and it thus can be applied when the recognition heuristic cannot. For binary cues (where 1 indicates the higher criterion value), the heuristic is defined as:
The validity v_i of a cue i is defined as the proportion of correct decisions:

v_i = c_i / t_i

where c_i is the number of correct decisions and t_i is the number of cases in which the values of the two alternatives differ on cue i. The validity of each cue can be estimated from samples of observation.
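A minimal sketch of take-the-best for binary cues; the two cue profiles and the cue validities below are invented purely to show the search, stopping, and decision steps.

```python
def take_the_best(cues_a, cues_b, validities):
    """Search cues in order of validity, stop at the first one that
    discriminates, and infer that the alternative with cue value 1 is larger."""
    for cue in sorted(validities, key=validities.get, reverse=True):
        if cues_a[cue] != cues_b[cue]:
            return "a" if cues_a[cue] == 1 else "b"
    return None  # no cue discriminates: guess

# Hypothetical cue profiles for two cities, and cue validities
city_a = {"capital": 1, "airport": 1, "university": 0}
city_b = {"capital": 0, "airport": 1, "university": 1}
validities = {"capital": 0.9, "airport": 0.8, "university": 0.7}
print(take_the_best(city_a, city_b, validities))  # "a": the "capital" cue decides
```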
Take-the-best has remarkable properties. In comparison with complex machine learning models, it has been shown that it can often predict better than regression models, [ 29 ] classification-and-regression trees, neural networks, and support vector machines . [Brighton & Gigerenzer, 2015]
Similarly, psychological studies have shown that in situations where take-the-best is ecologically rational, a large proportion of people tend to rely on it. This includes decision making by airport customs officers, [ 30 ] professional burglars and police officers, [ 31 ] and student populations. [ 32 ] The conditions under which take-the-best is ecologically rational are mostly known. [ 33 ] Take-the-best shows that the previous view that ignoring part of the information would be generally irrational is incorrect. Less can be more.
A fast-and-frugal tree is a heuristic that allows one to make classifications, [ 34 ] such as whether a patient with severe chest pain is likely to have a heart attack or not, [ 35 ] or whether a car approaching a checkpoint is likely to be driven by a terrorist or a civilian. [ 36 ] It is called "fast and frugal" because, just like take-the-best, it allows for quick decisions with only a few cues or attributes. It is called a "tree" because it can be represented like a decision tree in which one asks a sequence of questions. Unlike a full decision tree, however, it is an incomplete tree – to save time and reduce the danger of overfitting.
Figure 1 shows a fast-and-frugal tree used for screening for HIV (human immunodeficiency virus). Just like take-the-best, the tree has a search rule, stopping rule, and decision rule:
In the HIV tree, an ELISA (enzyme-linked immunosorbent assay) test is conducted first. If the outcome is negative, then the testing procedure stops and the client is informed of the good news, that is, "no HIV." If, however, the result is positive, a second ELISA test is performed, preferably from a different manufacturer. If the second ELISA is negative, then the procedure stops and the client is informed of having "no HIV." However, if the result is positive, a final test, the Western blot, is conducted.
In general, for n binary cues, a fast-and-frugal tree has exactly n + 1 exits – one for each cue and two for the final cue. A full decision tree, in contrast, requires 2 n exits. The order of cues (tests) in a fast-and-frugal tree is determined by the sensitivity and specificity of the cues, or by other considerations such as the costs of the tests. In the case of the HIV tree, the ELISA is ranked first because it produces fewer misses than the Western blot test, and also is less expensive. The Western blot test, in contrast, produces fewer false alarms. In a full tree, in contrast, order does not matter for the accuracy of the classifications.
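A minimal sketch of the HIV screening tree described above, written as a chain of early exits; the three boolean test results stand in for real assay outputs.

```python
def hiv_screening_tree(elisa_1_positive, elisa_2_positive, western_blot_positive):
    """Fast-and-frugal tree: one exit after each cue (n + 1 exits for n cues)."""
    if not elisa_1_positive:
        return "no HIV"        # exit 1
    if not elisa_2_positive:
        return "no HIV"        # exit 2
    if western_blot_positive:
        return "HIV"           # exit 3
    return "no HIV"            # exit 4

print(hiv_screening_tree(True, True, False))  # -> "no HIV"
print(hiv_screening_tree(True, True, True))   # -> "HIV"
```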
Fast-and-frugal trees are descriptive or prescriptive models of decision making under uncertainty. For instance, an analysis of court decisions reported that the best model of how London magistrates make bail decisions is a fast-and-frugal tree. [ 37 ] The HIV tree is both a prescriptive model – physicians are taught the procedure – and a descriptive model, that is, most physicians actually follow the procedure.
Tallying is a heuristic that considers the most viable choice in a decision making problem to be the one which outperforms its alternatives across most identifiable measures and criteria. [ 38 ]
As opposed to the take-the-best heuristic which considers a weighted-value when assessing the importance of a specific aspect (cues) involved in a choice, a person who tallies merely considers all available aspects of an alternative choice with equal weight and chooses the option with the most aspects in favour. [ 4 ]
In this sense, tallying differs from the take-the-best heuristic, as the latter naturally discriminates based on the value applied to each aspect, and the two can therefore lead to opposing results. [ 39 ]
To represent this, consider a scenario in which a prediction is made as to whether Team A or Team B will be more successful in the upcoming basketball season. Team A is superior in 3 of the 4 aspects contributing to team success, but the single aspect in which Team B is superior is weighted as objectively more important than the others for team success. The tallying heuristic would consider Team A to be more successful because it outperforms on most measures; take-the-best, however, would consider the weighted value of the single aspect in which Team B is superior and determine that Team B would be the most successful.
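A minimal sketch contrasting tallying with take-the-best on the basketball example above; the four aspects, their weights, and the team that wins each aspect are invented for illustration.

```python
# Each entry: (aspect, weight, team that is superior on that aspect)
aspects = [
    ("scoring",    0.2, "Team A"),
    ("defense",    0.2, "Team A"),
    ("rebounding", 0.2, "Team A"),
    ("coaching",   0.4, "Team B"),   # the single most heavily weighted aspect
]

def tally(aspects):
    """Count the aspects each team wins, weighting all aspects equally."""
    counts = {}
    for _, _, winner in aspects:
        counts[winner] = counts.get(winner, 0) + 1
    return max(counts, key=counts.get)

def take_the_best(aspects):
    """Decide on the single most heavily weighted aspect alone."""
    return max(aspects, key=lambda item: item[1])[2]

print(tally(aspects))          # "Team A": superior on 3 of 4 aspects
print(take_the_best(aspects))  # "Team B": superior on the most important aspect
```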
In their initial research, Tversky and Kahneman proposed three heuristics—availability, representativeness, and anchoring and adjustment. Subsequent work has identified many more. Heuristics that underlie judgment are called "judgment heuristics". Another type, called "evaluation heuristics", is used to judge the desirability of possible choices. [ 40 ]
List of informal models of heuristics :
In psychology, availability is the ease with which a particular idea can be brought to mind. When people estimate how likely or how frequent an event is on the basis of its availability, they are using the availability heuristic. [ 58 ] When an infrequent event can be brought easily and vividly to mind, this heuristic overestimates its likelihood. For example, people overestimate their likelihood of dying in a dramatic event such as a tornado or terrorism . Dramatic, violent deaths are usually more highly publicised and therefore have a higher availability. [ 59 ] On the other hand, common but mundane events are hard to bring to mind, so their likelihoods tend to be underestimated. These include deaths from suicides , strokes , and diabetes . This heuristic is one of the reasons why people are more easily swayed by a single, vivid story than by a large body of statistical evidence. [ 60 ] It may also play a role in the appeal of lotteries : to someone buying a ticket, the well-publicised, jubilant winners are more available than the millions of people who have won nothing. [ 59 ]
When people judge whether more English words begin with T or with K , the availability heuristic gives a quick way to answer the question. Words that begin with T come more readily to mind, and so subjects give a correct answer without counting out large numbers of words. However, this heuristic can also produce errors. When people are asked whether there are more English words with K in the first position or with K in the third position, they use the same process. It is easy to think of words that begin with K , such as kangaroo , kitchen , or kept . It is harder to think of words with K as the third letter, such as lake , or acknowledge , although objectively these are three times more common. This leads people to the incorrect conclusion that K is more common at the start of words. [ 14 ] In another experiment, subjects heard the names of many celebrities, roughly equal numbers of whom were male and female. The subjects were then asked whether the list of names included more men or more women. When the men in the list were more famous, a great majority of subjects incorrectly thought there were more of them, and vice versa for women. Tversky and Kahneman's interpretation of these results is that judgments of proportion are based on availability, which is higher for the names of better-known people. [ 58 ]
In one experiment that occurred before the 1976 U.S. Presidential election , some participants were asked to imagine Gerald Ford winning, while others did the same for a Jimmy Carter victory. Each group subsequently viewed their allocated candidate as significantly more likely to win. The researchers found a similar effect when students imagined a good or a bad season for a college football team. [ 61 ] The effect of imagination on subjective likelihood has been replicated by several other researchers. [ 60 ]
A concept's availability can be affected by how recently and how frequently it has been brought to mind. In one study, subjects were given partial sentences to complete. The words were selected to activate the concept either of hostility or of kindness: a process known as priming . They then had to interpret the behavior of a man described in a short, ambiguous story. Their interpretation was biased towards the emotion they had been primed with: the more priming, the greater the effect. A greater interval between the initial task and the judgment decreased the effect. [ 62 ]
Tversky and Kahneman offered the availability heuristic as an explanation for illusory correlations in which people wrongly judge two events to be associated with each other. They explained that people judge correlation on the basis of the ease of imagining or recalling the two events together. [ 14 ] [ 58 ]
The representativeness heuristic is seen when people use categories, for example when deciding whether or not a person is a criminal. An individual thing has a high representativeness for a category if it is very similar to a prototype of that category. When people categorise things on the basis of representativeness, they are using the representativeness heuristic. "Representative" is here meant in two different senses: the prototype used for comparison is representative of its category, and representativeness is also a relation between that prototype and the thing being categorised. [ 14 ] [ 63 ] While it is effective for some problems, this heuristic involves attending to the particular characteristics of the individual, ignoring how common those categories are in the population (called the base rates ). Thus, people can overestimate the likelihood that something has a very rare property, or underestimate the likelihood of a very common property. This is called the base rate fallacy . Representativeness explains this and several other ways in which human judgments break the laws of probability. [ 14 ]
The representativeness heuristic is also an explanation of how people judge cause and effect: when they make these judgements on the basis of similarity, they are also said to be using the representativeness heuristic. This can lead to a bias, incorrectly finding causal relationships between things that resemble one another and missing them when the cause and effect are very different. Examples of this include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical associative thinking . [ 64 ] [ 65 ]
A 1973 experiment used a psychological profile of Tom W., a fictional graduate student. [ 66 ] One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (including Law, Engineering and Library Science). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates , i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). If people based their judgments on probability, they would say that Tom is more likely to study Humanities than Library Science, because there are many more Humanities students, and the additional information in the profile is vague and unreliable. Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity. [ 66 ]
When people rely on representativeness, they can fall into an error which breaks a fundamental law of probability . [ 63 ] Tversky and Kahneman gave subjects a short character sketch of a woman called Linda, describing her as, "31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations". People reading this description then ranked the likelihood of different statements about Linda. Amongst others, these included "Linda is a bank teller", and, "Linda is a bank teller and is active in the feminist movement". People showed a strong tendency to rate the latter, more specific statement as more likely, even though a conjunction of the form "Linda is both X and Y " can never be more probable than the more general statement "Linda is X ". The explanation in terms of heuristics is that the judgment was distorted because, for the readers, the character sketch was representative of the sort of person who might be an active feminist but not of someone who works in a bank. A similar exercise concerned Bill, described as "intelligent but unimaginative". A great majority of people reading this character sketch rated "Bill is an accountant who plays jazz for a hobby", as more likely than "Bill plays jazz for a hobby". [ 67 ]
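The logical point at issue is an elementary identity of probability (standard, and not specific to this study):

```latex
P(X \text{ and } Y) \;=\; P(X)\,P(Y \mid X) \;\le\; P(X), \qquad \text{since } P(Y \mid X) \le 1,
```

so the more specific statement about Linda can never be more probable than the general one.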
Without success, Tversky and Kahneman used what they described as "a series of increasingly desperate manipulations" to get their subjects to recognise the logical error. In one variation, subjects had to choose between a logical explanation of why "Linda is a bank teller" is more likely, and a deliberately illogical argument which said that "Linda is a feminist bank teller" is more likely "because she resembles an active feminist more than she resembles a bank teller". Sixty-five percent of subjects found the illogical argument more convincing. [ 67 ] [ 68 ] Other researchers also carried out variations of this study, exploring the possibility that people had misunderstood the question. They did not eliminate the error. [ 69 ] [ 70 ] It has been shown that individuals with high CRT scores are significantly less likely to be subject to the conjunction fallacy. [ 71 ] The error disappears when the question is posed in terms of frequencies. Everyone in these versions of the study recognised that out of 100 people fitting an outline description, the conjunction statement ("She is X and Y ") cannot apply to more people than the general statement ("She is X "). [ 72 ]
Tversky and Kahneman asked subjects to consider a problem about random variation. Imagine for simplicity that exactly half of the babies born in a hospital are male; even so, the ratio will not be exactly half in every time period. On some days, more girls will be born and on others, more boys. The question was, does the likelihood of deviating from exactly half depend on whether there are many or few births per day? It is a well-established consequence of sampling theory that proportions will vary much more from day to day when the typical number of births per day is small. However, people's answers to the problem do not reflect this fact. They typically reply that the number of births in the hospital makes no difference to the likelihood of more than 60% male babies in one day. The explanation in terms of the heuristic is that people consider only how representative the figure of 60% is of the previously given average of 50%. [ 14 ] [ 73 ]
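A minimal simulation makes the sampling-theory point concrete; the daily birth counts used here (15 and 100) and the function name are illustrative assumptions, not figures from the original question.

```python
import random

def fraction_of_days_over_60pct_boys(births_per_day, n_days=20_000):
    """Simulate n_days days; return the fraction of days on which more than
    60% of that day's births are male, with each birth male with probability
    0.5 (per the simplification in the text)."""
    over = 0
    for _ in range(n_days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys > 0.6 * births_per_day:
            over += 1
    return over / n_days

# Hypothetical sizes: a small hospital and a large one.
print(fraction_of_days_over_60pct_boys(15))   # roughly 0.15
print(fraction_of_days_over_60pct_boys(100))  # roughly 0.02
```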
Richard E. Nisbett and colleagues suggest that representativeness explains the dilution effect , in which irrelevant information weakens the effect of a stereotype . Subjects in one study were asked whether "Paul" or "Susan" was more likely to be assertive, given no other information than their first names. They rated Paul as more assertive, apparently basing their judgment on a gender stereotype. Another group, told that Paul's and Susan's mothers each commute to work in a bank, did not show this stereotype effect; they rated Paul and Susan as equally assertive. The explanation is that the additional information about Paul and Susan made them less representative of men or women in general, and so the subjects' expectations about men and women had a weaker effect. [ 74 ] This means that irrelevant, non-diagnostic information about an issue can weaken the influence of relevant information about it. [ 75 ]
Representativeness explains systematic errors that people make when judging the probability of random events. For example, in a sequence of coin tosses, each of which comes up heads (H) or tails (T), people reliably tend to judge a clearly patterned sequence such as HHHTTT as less likely than a less patterned sequence such as HTHTTH. These sequences have exactly the same probability, but people tend to see the more clearly patterned sequences as less representative of randomness, and so less likely to result from a random process. [ 14 ] [ 76 ] Tversky and Kahneman argued that this effect underlies the gambler's fallacy ; a tendency to expect outcomes to even out over the short run, like expecting a roulette wheel to come up black because the last several throws came up red. [ 63 ] [ 77 ] They emphasised that even experts in statistics were susceptible to this illusion: in a 1971 survey of professional psychologists, they found that respondents expected samples to be overly representative of the population they were drawn from. As a result, the psychologists systematically overestimated the statistical power of their tests, and underestimated the sample size needed for a meaningful test of their hypotheses. [ 14 ] [ 77 ]
Anchoring and adjustment is a heuristic used in many situations where people estimate a number. [ 78 ] According to Tversky and Kahneman's original description, it involves starting from a readily available number—the "anchor"—and shifting either up or down to reach an answer that seems plausible. [ 78 ] In Tversky and Kahneman's experiments, people did not shift far enough away from the anchor. Hence the anchor contaminates the estimate, even if it is clearly irrelevant. In one experiment, subjects watched a number being selected from a spinning "wheel of fortune". They had to say whether a given quantity was larger or smaller than that number. For instance, they might be asked, "Is the percentage of African countries which are members of the United Nations larger or smaller than 65%?" They then tried to guess the true percentage. Their answers correlated well with the arbitrary number they had been given. [ 78 ] [ 79 ] Insufficient adjustment from an anchor is not the only explanation for this effect. An alternative theory is that people form their estimates on evidence which is selectively brought to mind by the anchor. [ 80 ]
The anchoring effect has been demonstrated by a wide variety of experiments both in laboratories and in the real world. [ 79 ] [ 81 ] It remains when the subjects are offered money as an incentive to be accurate, or when they are explicitly told not to base their judgment on the anchor. [ 81 ] The effect is stronger when people have to make their judgments quickly. [ 82 ] Subjects in these experiments lack introspective awareness of the heuristic, denying that the anchor affected their estimates. [ 82 ]
Even when the anchor value is obviously random or extreme, it can still contaminate estimates. [ 81 ] One experiment asked subjects to estimate the year of Albert Einstein 's first visit to the United States. Anchors of 1215 and 1992 contaminated the answers just as much as more sensible anchor years. [ 82 ] Other experiments asked subjects if the average temperature in San Francisco is more or less than 558 degrees, or whether there had been more or fewer than 100,025 top ten albums by The Beatles . These deliberately absurd anchors still affected estimates of the true numbers. [ 79 ]
Anchoring results in a particularly strong bias when estimates are stated in the form of a confidence interval . An example is where people predict the value of a stock market index on a particular day by defining an upper and lower bound so that they are 98% confident the true value will fall in that range. A reliable finding is that people anchor their upper and lower bounds too close to their best estimate. [ 14 ] This leads to an overconfidence effect . One much-replicated finding is that when people are 98% certain that a number is in a particular range, they are wrong about thirty to forty percent of the time. [ 14 ] [ 83 ]
Anchoring also causes particular difficulty when many numbers are combined into a composite judgment. Tversky and Kahneman demonstrated this by asking a group of people to rapidly estimate the product 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1. Another group had to estimate the same product in reverse order; 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8. Both groups underestimated the answer by a wide margin, but the latter group's average estimate was significantly smaller. [ 84 ] The explanation in terms of anchoring is that people multiply the first few terms of each product and anchor on that figure. [ 84 ] A less abstract task is to estimate the probability that an aircraft will crash, given that there are numerous possible faults each with a likelihood of one in a million. A common finding from studies of these tasks is that people anchor on the small component probabilities and so underestimate the total. [ 84 ] A corresponding effect happens when people estimate the probability of multiple events happening in sequence, such as an accumulator bet in horse racing. For this kind of judgment, anchoring on the individual probabilities results in an overestimation of the combined probability. [ 84 ]
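For reference, the exact value of the product, together with the elementary inequalities behind the disjunctive (aircraft) and conjunctive (accumulator bet) cases, can be written out directly:

```latex
8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 8! = 40\,320, \qquad
1-(1-p)^{k} \;\ge\; p, \qquad
p_{1}p_{2}\cdots p_{k} \;\le\; \min_{i} p_{i}.
```

Anchoring on an individual fault probability p therefore understates the chance that at least one of k faults occurs, while anchoring on an individual leg probability overstates the chance that all k legs of a bet come in.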
People's valuation of goods, and the quantities they buy, respond to anchoring effects. In one experiment, people wrote down the last two digits of their social security numbers . They were then asked to consider whether they would pay this number of dollars for items whose value they did not know, such as wine, chocolate, and computer equipment. They then entered an auction to bid for these items. Those with the highest two-digit numbers submitted bids that were many times higher than those with the lowest numbers. [ 85 ] [ 86 ] When a stack of soup cans in a supermarket was labelled, "Limit 12 per customer", the label influenced customers to buy more cans. [ 82 ] In another experiment, real estate agents appraised the value of houses on the basis of a tour and extensive documentation. Different agents were shown different listing prices, and these affected their valuations. For one house, the appraised value ranged from US$ 114,204 to $128,754. [ 87 ] [ 88 ]
Anchoring and adjustment has also been shown to affect grades given to students. In one experiment, 48 teachers were given bundles of student essays, each of which had to be graded and returned. They were also given a fictional list of the students' previous grades. The mean of these grades affected the grades that teachers awarded for the essay. [ 89 ]
One study showed that anchoring affected the sentences in a fictional rape trial. [ 90 ] The subjects were trial judges with, on average, more than fifteen years of experience. They read documents including witness testimony, expert statements, the relevant penal code, and the final pleas from the prosecution and defence. The two conditions of this experiment differed in just one respect: the prosecutor demanded a 34-month sentence in one condition and 12 months in the other; there was an eight-month difference between the average sentences handed out in these two conditions. [ 90 ] In a similar mock trial, the subjects took the role of jurors in a civil case. They were either asked to award damages "in the range from $15 million to $50 million" or "in the range from $50 million to $150 million". Although the facts of the case were the same each time, jurors given the higher range decided on an award that was about three times higher. This happened even though the subjects were explicitly warned not to treat the requests as evidence. [ 85 ]
Assessments can also be influenced by the stimuli provided. In one review, researchers found that if a stimulus is perceived as important or as carrying "weight" in a situation, people are more likely to judge that stimulus as physically heavier. [ 91 ]
" Affect ", in this context, is a feeling such as fear, pleasure or surprise. It is shorter in duration than a mood , occurring rapidly and involuntarily in response to a stimulus . While reading the words "lung cancer" might generate an affect of dread , the words "mother's love" can create an affect of affection and comfort. When people use affect ("gut responses") to judge benefits or risks, they are using the affect heuristic. [ 92 ] The affect heuristic has been used to explain why messages framed to activate emotions are more persuasive than those framed in a purely factual way. [ 93 ]
Decision makers, whether at an organisational or national level, can face the dilemma of whether to continue with an operation or withdraw from it. The escalation of commitment heuristic describes how people often lock themselves into losing courses of action in the hope that investing more resources into an operation will turn around losses. [ 94 ] [ 95 ] Furthermore, escalation of commitment can be expected to occur in situations where the decision maker can claim credit for operational success, but losses and operational failure are directed to and absorbed by others, such as a larger entity. [ 96 ] Cognitive determinants that can influence escalation of commitment include self-justification, problem framing, sunk costs, goal substitution, self-efficacy, accountability, and illusion of control. [ 94 ] The general flow of events that leads to the escalation of commitment heuristic is as follows:
Aside from being relevant to decision makers in firms and organisations, escalation of commitment is also applicable to decisions made by national leaders. An example of this is decisions relating to further investment in wars. In a war-based scenario, the costs are predominately borne by soldiers and taxpayers. Additionally, decision makers in war scenarios often do not have to directly or immediately bear the costs of their decisions at the same level as soldiers and taxpayers do, hence making their decision to keep investing easier. This reflects the escalation of commitment heuristic, and inevitably creates a cyclical process of reinvestment that has the potential to cause long-term issues economically, socially, and politically at both local and global scales. [ 96 ]
There are competing theories of human judgment, which differ on whether the use of heuristics is irrational. A cognitive laziness approach argues that heuristics are inevitable shortcuts given the limitations of the human brain. According to the natural assessments approach, some complex calculations are already done rapidly and automatically by the brain, and other judgments make use of these processes rather than calculating from scratch. This has led to a theory called "attribute substitution", which says that people often handle a complicated question by answering a different, related question, without being aware that this is what they are doing. [ 97 ] A third approach argues that heuristics perform just as well as more complicated decision-making procedures, but more quickly and with less information. This perspective emphasises the "fast and frugal" nature of heuristics. [ 98 ]
An effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer states that people use a variety of techniques to reduce the effort of making decisions. [ 99 ]
In 2002 Daniel Kahneman and Shane Frederick proposed a process called attribute substitution which happens without conscious awareness. According to this theory, when somebody makes a judgment (of a target attribute ) which is computationally complex, a rather more easily calculated heuristic attribute is substituted. [ 100 ] In effect, a difficult problem is dealt with by answering a rather simpler problem, without the person being aware this is happening. [ 97 ] This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean . [ 97 ] [ 100 ] [ 101 ]
This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. [ 97 ] [ 100 ]
In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g. the brightness of a light, the severity of a crime) is encoded by brain cells in a way that is independent of modality . Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be very different in nature. [ 97 ]
[P]eople are not accustomed to thinking hard, and are often content to trust a plausible judgment that comes to mind.
Kahneman and Frederick propose three conditions for attribute substitution: [ 97 ]
Kahneman gives an example where some Americans were offered insurance against their own death in a terrorist attack while on a trip to Europe, while another group were offered insurance that would cover death of any kind on the trip. Even though "death of any kind" includes "death in a terrorist attack", the former group were willing to pay more than the latter. Kahneman suggests that the attribute of fear is being substituted for a calculation of the total risks of travel. [ 102 ] Fear of terrorism for these subjects was stronger than a general fear of dying on a foreign trip.
Gerd Gigerenzer and colleagues have argued that heuristics can be used to make judgments that are accurate rather than biased. According to them, heuristics are "fast and frugal" alternatives to more complicated procedures, giving answers that are just as good. [ 103 ]
Warren Thorngate, a social psychologist, implemented ten simple decision rules or heuristics in a computer program. He determined how often each heuristic selected alternatives with highest-through-lowest expected value in a series of randomly-generated decision situations. He found that most of the simulated heuristics selected alternatives with highest expected value and almost never selected alternatives with lowest expected value. [ 104 ]
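A minimal sketch of the kind of simulation Thorngate ran; the problem generator and the two rules shown (an "equiprobable" averaging rule and a "maximax" rule) are illustrative stand-ins under stated assumptions, not his actual ten heuristics.

```python
import random

def random_decision_problem(n_options=4, n_outcomes=3):
    """Each option is a list of (probability, payoff) pairs with random,
    normalised probabilities and random payoffs."""
    options = []
    for _ in range(n_options):
        raw = [random.random() for _ in range(n_outcomes)]
        total = sum(raw)
        options.append([(p / total, random.uniform(0, 100)) for p in raw])
    return options

def expected_value_choice(options):
    return max(range(len(options)),
               key=lambda i: sum(p * x for p, x in options[i]))

# Two simple heuristics (illustrative, not Thorngate's exact rules):
def equiprobable(options):   # average the payoffs, ignoring probabilities
    return max(range(len(options)),
               key=lambda i: sum(x for _, x in options[i]) / len(options[i]))

def maximax(options):        # pick the option with the single best payoff
    return max(range(len(options)),
               key=lambda i: max(x for _, x in options[i]))

def agreement_rate(heuristic, trials=20_000):
    hits = 0
    for _ in range(trials):
        problem = random_decision_problem()
        hits += heuristic(problem) == expected_value_choice(problem)
    return hits / trials

print(agreement_rate(equiprobable))  # typically well above chance (0.25)
print(agreement_rate(maximax))
```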
Psychologist Benoît Monin reports a series of experiments in which subjects, looking at photographs of faces, have to judge whether they have seen those faces before. It is repeatedly found that attractive faces are more likely to be mistakenly labeled as familiar. [ 105 ] Monin interprets this result in terms of attribute substitution. The heuristic attribute in this case is a "warm glow"; a positive feeling towards someone that might either be due to their being familiar or being attractive. This interpretation has been criticised, because not all the variance in familiarity is accounted for by the attractiveness of the photograph. [ 99 ]
Legal scholar Cass Sunstein has argued that attribute substitution is pervasive when people reason about moral , political or legal matters. [ 106 ] Given a difficult, novel problem in these areas, people search for a more familiar, related problem (a "prototypical case") and apply its solution as the solution to the harder problem. According to Sunstein, the opinions of trusted political or religious authorities can serve as heuristic attributes when people are asked their own opinions on a matter. Another source of heuristic attributes is emotion : people's moral opinions on sensitive subjects like sexuality and human cloning may be driven by reactions such as disgust , rather than by reasoned principles. [ 107 ] Sunstein has been challenged as not providing enough evidence that attribute substitution, rather than other processes, is at work in these cases. [ 99 ]
The role of persuasion in heuristic processing can be explained through the heuristic-systematic model. [ 108 ] This model holds that there are two ways people can process information from persuasive messages: heuristically and systematically. Heuristic processing relies on quick, shallow judgments, whereas systematic processing involves more analytical and inquisitive thinking, in which individuals look beyond their own prior knowledge for answers. [ 109 ] [ 110 ] Consider, for example, an advertisement for a specific medication. A viewer without prior knowledge sees a person in pharmaceutical attire and assumes that they know what they are talking about; the presenter automatically gains credibility, and the viewer is more likely to trust the content of the message they deliver. A viewer who works in that field, or who already has prior knowledge of the medication, is less likely to be persuaded by the advertisement because of their systematic way of thinking. This was formally demonstrated in an experiment conducted by Chaiken and Maheswaran (1994). [ 111 ] The fluency heuristic is closely tied to persuasion: it describes how people make "the most of an automatic by-product of retrieval from memory". [ 112 ] For example, when a friend asks for a good book to read, [ 113 ] many could come to mind, but you name the first book recalled from memory and, because it was the first thought, value it above any other book you could suggest. The effort heuristic is almost identical to fluency, the one distinction being that objects which take longer to produce are seen as more valuable. One may conclude that a glass vase is more valuable than a drawing merely because the vase may take longer to make. These two heuristics illustrate how easily we may be influenced by our mental shortcuts, or by whatever comes quickest to mind. [ 114 ] | https://en.wikipedia.org/wiki/Heuristic_(psychology)
A heuristic argument is an argument that reasons from the value of a method or principle that has been shown experimentally (especially through trial-and-error ) to be useful or convincing in learning, discovery and problem-solving , but whose line of reasoning involves key oversimplifications that make it not entirely rigorous. [ 1 ] A widely used and important example of a heuristic argument is Occam's Razor .
It is a speculative, non-rigorous argument that relies on analogy or intuition, and that allows one to reach a result or an approximation that is to be checked later with more rigor. Otherwise, the results are generally to be doubted. It is used as a hypothesis or a conjecture in an investigation, though it can also serve as a mnemonic. [ 2 ]
| https://en.wikipedia.org/wiki/Heuristic_argument
Heuristic routing is a system used to describe how deliveries are made when problems in a network topology arise. Heuristic is an adjective used in relation to methods of learning, discovery, or problem solving. Routing is the process of selecting paths to specific destinations. Heuristic routing is used for traffic in the telecommunications networks and transport networks of the world.
Heuristic routing is achieved using specific algorithms to determine a better, although not always optimal, path to a destination. When an interruption in a network topology occurs, the software running on the networking electronics can calculate another route to the desired destination via an alternate available path.
According to Shuster & Schur (1974 , p. 1):
The heuristic approach to problem solving consists of applying human intelligence, experience, common sense and certain rules of thumb (or heuristics) to develop an acceptable, but not necessarily an optimum, solution to a problem. Of course, determining what constitutes an acceptable solution is part of the task of deciding which approach to use; but broadly defined, an acceptable solution is one that is both reasonably good (close to optimum) and derived within reasonable effort, time, and cost constraints. Often the effort (manpower, computer, and other resources) required, the time limits on when the solution is needed, and the cost to compile, process, and analyze all the data required for deterministic or other complicated procedures preclude their usefulness or favor the faster, simpler heuristic approach. Thus, the heuristic approach is generally used when deterministic techniques are not available, economical, or practical.
Heuristic routing allows a measure of route optimization in telecommunications networks based on recent empirical knowledge of the state of the network. Data, such as time delay , may be extracted from incoming messages, during specified periods and over different routes, and used to determine the optimum routing for transmitting data back to the sources.
The IP routing protocols in use today are based on one of two algorithms: distance vector or link state . Distance vector algorithms broadcast routing information to all neighboring routers. Link state routing protocols build a topographical map of the entire network based on updates from neighbor routers, and then use the Dijkstra algorithm to compute the shortest path to each destination. Metrics used are based on the number of hops, delay, throughput, traffic, and reliability.
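As a concrete illustration of the link-state half of that description, here is a minimal Dijkstra shortest-path computation over a small hypothetical topology; the node names and link costs are made up, and a real protocol such as OSPF builds its map from neighbour updates rather than from a hard-coded dictionary.

```python
import heapq

def dijkstra(graph, source):
    """Return the cost of the cheapest path from source to every node.
    graph maps each node to a dict of {neighbour: link cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbour, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Hypothetical router topology with hop/delay-style metrics.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```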
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. | https://en.wikipedia.org/wiki/Heuristic_routing |
Heusler compounds are magnetic intermetallics with face-centered cubic crystal structure and a composition of XYZ (half-Heuslers) or X 2 YZ (full-Heuslers), where X and Y are transition metals and Z is in the p-block . The term derives from the name of German mining engineer and chemist Friedrich Heusler , who studied such a compound (Cu 2 MnAl) in 1903. [ 1 ] Many of these compounds exhibit properties relevant to spintronics , such as magnetoresistance , variations of the Hall effect , ferro- , antiferro- , and ferrimagnetism , half- and semimetallicity , semiconductivity with spin filter ability, superconductivity , topological band structure and are actively studied as thermoelectric materials . Their magnetism results from a double-exchange mechanism between neighboring magnetic ions. Manganese , which sits at the body centers of the cubic structure, was the magnetic ion in the first Heusler compound discovered. (See the Bethe–Slater curve for details of why this happens.)
Depending on the field of literature being surveyed, one might encounter the same compound referred to with different chemical formulas. An example of the most common difference is X 2 YZ versus XY 2 Z, where the labels of the two transition metals X and Y in the compound are swapped. The traditional convention X 2 YZ [ 2 ] arises from the interpretation of Heuslers as intermetallics and is used predominantly in literature studying magnetic applications of Heuslers compounds. The XY 2 Z convention on the other hand is used mostly in thermoelectric materials [ 3 ] and transparent conducting applications [ 4 ] literature where semiconducting Heuslers (most half-Heuslers are semiconductors) are used. This convention, in which the left-most element on the periodic table comes first, uses the Zintl interpretation [ 5 ] of semiconducting compounds where the chemical formula XY 2 Z is written in order of increasing electronegativity. In well-known compounds such as Fe 2 VAl which were historically thought of as metallic (semi-metallic) but were more recently shown to be small-gap semiconductors [ 6 ] one might find both styles being used. In the present article semiconducting compounds might sometimes be mentioned in the XY 2 Z style.
Although traditionally thought to form at compositions XYZ and X 2 YZ, studies published after 2015 have discovered and reliably predicted Heusler compounds with atypical compositions such as XY 0.8 Z and X 1.5 YZ. [ 8 ] [ 9 ] Besides these ternary compositions, quaternary Heusler compositions called the double Half-Heusler X 2 YY'Z 2 [ 10 ] (e.g. Ti 2 FeNiSb 2 ) and triple Half-Heusler X 2 X'Y 3 Z 3 [ 7 ] (for e.g. Mg 2 VNi 3 Sb 3 ) have also been discovered. These "off-stoichiometric" (that is, differing from the well-known XYZ and X 2 YZ compositions) Heuslers are mostly semiconductors in the low temperature T = 0 K limit. [ 11 ] The stable compositions and corresponding electrical properties for these compounds can be quite sensitive to temperature [ 12 ] and their order-disorder transition temperatures often occur below room-temperatures. [ 10 ] Large amounts of defects at the atomic scale in off-stoichiometric Heuslers helps them achieve very low thermal conductivities and make them favorable for thermoelectric applications. [ 13 ] [ 14 ] The X 1.5 YZ semiconducting composition is stabilized by the transition metal X playing a dual role (electron donor as well as acceptor) in the structure. [ 15 ]
The half-Heusler compounds have distinctive properties and high tunability, which makes the class very promising as thermoelectric materials. A study using high-throughput ab initio calculations combined with machine learning techniques has predicted that there can be as many as 481 stable half-Heusler compounds. [ 16 ] The particular half-Heusler compounds of interest as thermoelectric materials (space group) are the semiconducting ternary compounds with a general formula XYZ where X is a more electropositive transition metal (such as Ti or Zr), Y is a less electropositive transition metal (such as Ni or Co), and Z is a heavy main group element (such as Sn or Sb). [ 17 ] [ 18 ] This flexible range of element selection allows many different combinations to form a half-Heusler phase and enables a diverse range of material properties.
Half-Heusler thermoelectric materials have distinct advantages over many other thermoelectric materials: low toxicity, inexpensive elements, robust mechanical properties, and high thermal stability make half-Heusler thermoelectrics an excellent option for mid-to-high temperature applications. [ 17 ] [ 19 ] However, the high thermal conductivity intrinsic to the highly symmetric HH structure has generally made HH thermoelectrics less efficient than other classes of TE materials. Many studies have focused on improving HH thermoelectrics by reducing the lattice thermal conductivity, and zT > 1 has been repeatedly recorded. [ 19 ]
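The zT mentioned here is the standard dimensionless thermoelectric figure of merit (a textbook definition rather than anything specific to Heusler compounds):

```latex
zT = \frac{S^{2}\sigma}{\kappa}\,T,
```

where S is the Seebeck coefficient, σ the electrical conductivity, κ the total (lattice plus electronic) thermal conductivity, and T the absolute temperature; reducing the lattice contribution to κ, as described above, raises zT.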
The magnetism of the early full-Heusler compound Cu 2 MnAl varies considerably with heat treatment and composition. [ 21 ] It has a room-temperature saturation induction of around 8,000 gauss, which exceeds that of the element nickel (around 6,100 gauss) but is smaller than that of iron (around 21,500 gauss). For early studies, see [ 1 ] [ 22 ] [ 23 ] . In 1934, Bradley and Rogers showed that the room-temperature ferromagnetic phase was a fully ordered structure of the L2 1 Strukturbericht type . [ 24 ] This has a primitive cubic lattice of copper atoms with alternate cells body-centered by manganese and aluminium . The lattice parameter is 5.95 Å . The molten alloy has a solidus temperature of about 910 °C. As it is cooled below this temperature, it transforms into a disordered, solid, body-centered cubic beta-phase. Below 750 °C, a B2 ordered lattice forms, with a primitive cubic copper lattice body-centered by a disordered manganese-aluminium sublattice. [ 21 ] [ 25 ] Cooling below 610 °C causes further ordering of the manganese and aluminium sub-lattice to the L2 1 form. [ 21 ] [ 26 ] In non-stoichiometric alloys, the ordering temperatures decrease, and the range of annealing temperatures in which the alloy does not form microprecipitates becomes smaller than for the stoichiometric material. [ 27 ] [ 28 ] [ 21 ]
Oxley found a value of 357 °C for the Curie temperature , below which the compound becomes ferromagnetic. [ 29 ] Neutron diffraction and other techniques have shown that a magnetic moment of around 3.7 Bohr magnetons resides almost solely on the manganese atoms. [ 21 ] [ 30 ] As these atoms are 4.2 Å apart, the exchange interaction, which aligns the spins, is likely indirect and is mediated through conduction electrons or the aluminium and copper atoms. [ 29 ] [ 31 ]
Electron microscopy studies demonstrated that thermal antiphase boundaries (APBs) form during cooling through the ordering temperatures, as ordered domains nucleate at different centers within the crystal lattice and are often out of step with each other where they meet. [ 21 ] [ 25 ] The anti-phase domains grow as the alloy is annealed. There are two types of APBs corresponding to the B2 and L2 1 types of ordering. APBs also form between dislocations if the alloy is deformed. At the APB the manganese atoms will be closer than in the bulk of the alloy and, for non-stoichiometric alloys with an excess of copper (e.g. Cu 2.2 MnAl 0.8 ), an antiferromagnetic layer forms on every thermal APB. [ 32 ] These antiferromagnetic layers completely supersede the normal magnetic domain structure and stay with the APBs if they are grown by annealing the alloy. This significantly modifies the magnetic properties of the non-stoichiometric alloy relative to the stoichiometric alloy which has a normal domain structure. Presumably this phenomenon is related to the fact that pure manganese is an antiferromagnet although it is not clear why the effect is not observed in the stoichiometric alloy. Similar effects occur at APBs in the ferromagnetic alloy MnAl at its stoichiometric composition. [ citation needed ]
Some Heusler compounds also exhibit properties of materials known as ferromagnetic shape-memory alloys . These are generally composed of nickel, manganese and gallium and can change their length by up to 10% in a magnetic field. [ 33 ]
Understanding the mechanical properties of Heusler compounds is paramount for temperature-sensitive applications (e.g. thermoelectrics ) for which some sub-classes of Heusler compounds are used. However, experimental studies are rarely encountered in the literature. [ 34 ] In fact, the commercialization of these compounds is limited by the material's ability to undergo intense, repetitive thermal cycling and resist cracking from vibrations. An appropriate measure for crack resistance is the material's toughness , which typically scales inversely with another important mechanical property: the mechanical strength . In this section, we highlight existing experimental and computational studies on the mechanical properties of Heusler alloys. Note that the mechanical properties of such a compositionally diverse class of materials are expected to depend on the chemical composition of the alloys themselves, and therefore trends in mechanical properties are difficult to identify without a case-by-case study.
The elastic modulus values of half-Heusler alloys range from 83 to 207 GPa, whereas the bulk modulus spans a tighter range, from 100 GPa in HfNiSn to 130 GPa in TiCoSb. [ 34 ] A collection of various density functional theory (DFT) calculations shows that half-Heusler compounds are predicted to have a lower elastic, shear , and bulk modulus than quaternary-, full-, and inverse-Heusler alloys. [ 34 ] DFT also predicts a decrease in elastic modulus with temperature in Ni 2 XAl (X=Sc, Ti, V), as well as an increase in stiffness with pressure. [ 35 ] The decrease in modulus with respect to temperature is also observed in TiNiSn, ZrNiSn, and HfNiSn, where ZrNiSn has the highest modulus and HfNiSn the lowest. [ 36 ] This phenomenon can be explained by the fact that the elastic modulus decreases with increasing interatomic separation : as temperature increases, the atomic vibrations also increase, resulting in a larger equilibrium interatomic separation.
The mechanical strength has also rarely been studied in Heusler compounds. One study showed that, in off-stoichiometric Ni 2 MnIn, the material reaches a peak strength of 475 MPa at 773 K, which drastically reduces to below 200 MPa at 973 K. [ 37 ] In another study, a polycrystalline Heusler alloy composed of the Ni-Mn-Sn ternary composition space was found to possess a peak compressive strength of about 2000 MPa with plastic deformation up to 5%. [ 38 ] However, the addition of indium to the Ni-Mn-Sn ternary alloy not only increases the porosity of the samples, but also reduces the compressive strength to 500 MPa. It is unclear from the study how much of the strength reduction is attributable to the increase in porosity caused by the indium addition. Note that this is opposite to the outcome expected from solid solution strengthening , where adding indium to the ternary system slows dislocation movement through dislocation-solute interaction and subsequently increases the material's strength.
The fracture toughness can also be tuned with composition modifications. For example, the average toughness of Ti 1−x (Zr, Hf) x NiSn ranges from 1.86 MPa m 1/2 to 2.16 MPa m 1/2 , increasing with Zr/Hf content. [ 36 ] The preparation of samples may affect the measured fracture toughness, however, as elaborated by O'Connor et al. [ 39 ] In their study, samples of Ti 0.5 Hf 0.5 Co 0.5 Ir 0.5 Sb 1−x Sn x were prepared using three different methods: a high-temperature solid state reaction , high-energy ball milling , and a combination of both. The study found higher fracture toughness, 2.7 MPa m 1/2 to 4.1 MPa m 1/2 , in samples prepared without a high-energy ball milling step, as opposed to 2.2 MPa m 1/2 to 3.0 MPa m 1/2 in samples prepared with ball milling. [ 36 ] [ 39 ] Fracture toughness is sensitive to inclusions and existing cracks in the material, so, as expected, it depends on the sample preparation.
Half-metallic ferromagnets exhibit metallic behavior in one spin channel and insulating behavior in the other spin channel. Half-metallicity in Heusler ferromagnets was first investigated by de Groot et al., [ 40 ] in the case of NiMnSb. Half-metallicity leads to full polarization of the conduction electrons. Half-metallic ferromagnets are therefore promising for spintronics applications. [ 41 ] | https://en.wikipedia.org/wiki/Heusler_compound
Heweliusz (also called BRITE-PL ) is the second [ 1 ] Polish scientific satellite launched in 2014 as part of the Bright-star Target Explorer (BRITE) programme. The spacecraft was launched aboard a Chang Zheng 4B rocket in August 2014. Heweliusz is an optical astronomy spacecraft built by the Space Research Centre of the Polish Academy of Sciences and operated by Centrum Astronomiczne im. Mikołaja Kopernika PAN; it is one of two Polish contributions to the BRITE constellation along with the Lem satellite. It is named after Johannes Hevelius .
Heweliusz is the third [ 2 ] Polish satellite (after PW-Sat and Lem ) ever launched. Along with Lem, TUGSAT-1 , UniBRITE-1 and BRITE-Toronto , it is one of a constellation of six nanosatellites of the BRIght-star Target Explorer project, operated by a consortium of universities from Canada, Austria and Poland. [ 3 ]
Heweliusz was developed and manufactured by the Space Research Centre of the Polish Academy of Sciences between 2010 and 2012, based around the Generic Nanosatellite Bus , and had a mass at launch of 7 kilograms (15 lb). [ 4 ] The satellite is used, along with four other operating spacecraft, [ a ] to conduct photometric observations of stars brighter than apparent magnitude 4.0 as seen from Earth. [ 6 ] Heweliusz was one of two Polish BRITE satellites launched, along with the Lem spacecraft. Four more satellites—two Austrian and two Canadian—were launched at different dates.
Heweliusz observes stars in the red colour range whereas Lem observes in blue . The two-colour option allows geometrical and thermal effects to be separated in the analysis of the observed phenomena. None of the much larger satellites, such as MOST and CoRoT , has this colour option, which is crucial for diagnosing the internal structure of stars. [ 7 ] Heweliusz photometrically measures low-level oscillations and temperature variations in stars brighter than visual magnitude 4.0, with unprecedented precision and temporal coverage not achievable through ground-based methods. [ 4 ]
The Heweliusz satellite was launched as a secondary payload on a Long March 4B rocket, whose primary payload was the Chinese Gaofen 2 earth-observation satellite. The launch was subcontracted to the China Great Wall Industry Corporation and China Aerospace Science and Technology Corporation (CASC). [ 8 ] The launch took place at 03:15 UTC on 19 August 2014 from the Taiyuan Satellite Launch Center , and the rocket deployed all of its payloads successfully. [ 9 ]
Although the other satellites in the BRITE constellation used the Canadian XPOD nanosatellite deployer, Heweliusz uses an indigenous Polish system. The DRAGON nanosatellite deployer was designed specifically for this mission by the Space Research Centre, in collaboration with the SRC spinoff company Astronika. Development, manufacturing, testing, and integration of the system took only two months. [ 10 ] | https://en.wikipedia.org/wiki/Heweliusz_(satellite) |
The Hewitt–Savage zero–one law is a theorem in probability theory , similar to Kolmogorov's zero–one law and the Borel–Cantelli lemma , that specifies that a certain type of event will either almost surely happen or almost surely not happen. It is sometimes known as the Savage-Hewitt law for symmetric events . It is named after Edwin Hewitt and Leonard Jimmie Savage . [ 1 ]
Let { X n } n = 1 ∞ {\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }} be a sequence of independent and identically-distributed random variables taking values in a set X {\displaystyle \mathbb {X} } . The Hewitt-Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables and whose occurrence or non-occurrence is unchanged by finite permutations of the indices, has probability either 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed).
Somewhat more abstractly, define the exchangeable sigma algebra or sigma algebra of symmetric events E {\displaystyle {\mathcal {E}}} to be the set of events (depending on the sequence of variables { X n } n = 1 ∞ {\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }} ) which are invariant under finite permutations of the indices in the sequence { X n } n = 1 ∞ {\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }} . Then A ∈ E ⟹ P ( A ) ∈ { 0 , 1 } {\displaystyle A\in {\mathcal {E}}\implies \mathbb {P} (A)\in \{0,1\}} .
Since any finite permutation can be written as a product of transpositions , if we wish to check whether or not an event A {\displaystyle A} is symmetric (lies in E {\displaystyle {\mathcal {E}}} ), it is enough to check if its occurrence is unchanged by an arbitrary transposition ( i , j ) {\displaystyle (i,j)} , i , j ∈ N {\displaystyle i,j\in \mathbb {N} } .
Let the sequence { X n } n = 1 ∞ {\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }} of independent and identically distributed random variables take values in [ 0 , ∞ ) {\displaystyle [0,\infty )} . Then the event that the series ∑ n = 1 ∞ X n {\displaystyle \sum _{n=1}^{\infty }X_{n}} converges (to a finite value) is a symmetric event in E {\displaystyle {\mathcal {E}}} , since its occurrence is unchanged under transpositions (for a finite re-ordering, the convergence or divergence of the series—and, indeed, the numerical value of the sum itself—is independent of the order in which we add up the terms). Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the common expected value E [ X n ] > 0 {\displaystyle \mathbb {E} [X_{n}]>0} (which essentially means that P ( X n = 0 ) < 1 {\displaystyle \mathbb {P} (X_{n}=0)<1} because of the random variables' non-negativity), we may conclude that
P ( ∑ n = 1 ∞ X n = + ∞ ) = 1 {\displaystyle \mathbb {P} \left(\sum _{n=1}^{\infty }X_{n}=+\infty \right)=1} , i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law. In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
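Following the transposition criterion above, a minimal check that the convergence event in this example is symmetric: swapping the terms with indices i < j changes only finitely many partial sums,

```latex
\sum_{n=1}^{N} X_{\pi(n)} \;=\;
\begin{cases}
\displaystyle\sum_{n=1}^{N} X_{n}, & N < i \ \text{or}\ N \ge j,\\[6pt]
\displaystyle\sum_{n=1}^{N} X_{n} - X_{i} + X_{j}, & i \le N < j,
\end{cases}
\qquad \pi = (i\;j),
```

so the permuted and unpermuted partial sums agree for every N ≥ j, and the series converges in one ordering exactly when it converges in the other.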
Continuing with the previous example, define S N = ∑ n = 1 N X n {\displaystyle S_{N}=\sum _{n=1}^{N}X_{n}} ,
which is the position at step N of a random walk with the iid increments X n . The event { S N = 0 infinitely often } is invariant under finite permutations. Therefore, the zero–one law is applicable and one infers that the probability of a random walk with real iid increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence ( S N ), but S N are not independent and therefore the Kolmogorov's zero–one law is not directly applicable here. [ 2 ] | https://en.wikipedia.org/wiki/Hewitt–Savage_zero–one_law |
Hex (also called Nash ) is a two player abstract strategy board game in which players attempt to connect opposite sides of a rhombus-shaped board made of hexagonal cells . Hex was invented by mathematician and poet Piet Hein in 1942 and later rediscovered and popularized by John Nash .
It is traditionally played on an 11×11 rhombus board, although 13×13 and 19×19 boards are also popular. The board is composed of hexagons called cells or hexes . Each player is assigned a pair of opposite sides of the board, which they must try to connect by alternately placing a stone of their color onto any empty hex. Once placed, the stones are never moved or removed. A player wins when they successfully connect their sides together through a chain of adjacent stones. Draws are impossible in Hex due to the topology of the game board.
Despite the simplicity of its rules, the game has deep strategy and sharp tactics. It also has profound mathematical underpinnings related to the Brouwer fixed-point theorem , matroids and graph connectivity . The game was first published under the name Polygon in the Danish newspaper Politiken on December 26, 1942. It was later marketed as a board game in Denmark under the name Con-tac-tix , and Parker Brothers marketed a version of it in 1952 called Hex ; they are no longer in production. Hex can also be played with paper and pencil on hexagonally ruled graph paper.
Hex is a finite, 2-player perfect information game, and an abstract strategy game that belongs to the general category of connection games . [ 1 ] It can be classified as a Maker-Breaker game , [ 1 ] : 122 a particular type of positional game . Since the game can never end in a draw , [ 1 ] : 99 Hex is also a determined game .
Hex is a special case of the "node" version of the Shannon switching game . [ 1 ] : 122 Hex can be played as a board game or as a paper-and-pencil game .
Hex is played on a rhombic grid of hexagons, typically of size 11×11, although other sizes are also possible. Each player has an allocated color, conventionally red and blue, or black and white. [ 2 ] Each player is also assigned two opposite board edges. The hexagons on each of the four corners belong to both adjacent board edges.
The players take turns placing a stone of their color on a single cell of the board. The most common convention is for Red or Black to go first. Once placed, stones are not moved, replaced, or removed from the board. Each player's goal is to form a connected path of their own stones linking their two board edges. The player who completes such a connection wins the game.
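A compact sketch of how the winning condition — a chain of adjacent stones linking a player's two edges — can be checked programmatically with a union-find structure. The coordinate scheme (row/column cells on an n×n rhombus with six-neighbour adjacency, Red connecting top to bottom) is one common convention, not something prescribed by the rules themselves, and the UnionFind helper is written here for illustration.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def red_has_won(stones, n):
    """stones: set of (row, col) cells occupied by Red, who (by assumption)
    connects the top edge (row 0) to the bottom edge (row n-1)."""
    uf = UnionFind(n * n + 2)
    top, bottom = n * n, n * n + 1          # two virtual edge nodes
    neighbours = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]
    for (r, c) in stones:
        cell = r * n + c
        if r == 0:
            uf.union(cell, top)
        if r == n - 1:
            uf.union(cell, bottom)
        for dr, dc in neighbours:
            if (r + dr, c + dc) in stones:
                uf.union(cell, (r + dr) * n + (c + dc))
    return uf.find(top) == uf.find(bottom)

# A straight vertical chain on a 3x3 board wins for Red.
print(red_has_won({(0, 1), (1, 1), (2, 1)}, 3))  # True
```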
To compensate for the first player's advantage, the swap rule (also called the pie rule) is normally used. This rule allows the second player to choose whether to switch positions with the first player after the first player makes the first move.
When it is clear to both players who will win the game, it is customary, but not required, for the losing player to resign. In practice, most games of Hex end with one of the players resigning.
The game was invented by the Danish mathematician Piet Hein , who introduced it in 1942 at the Niels Bohr Institute . Although Hein later renamed it to Con-tac-tix, [ 3 ] [ 4 ] it became known in Denmark under the name Polygon due to an article by Hein in the 26 December 1942 edition of the Danish newspaper Politiken , the first published description of the game, in which he used that name.
The game was rediscovered in 1948 or 1949 by the mathematician John Nash at Princeton University . [ 2 ] [ 5 ] According to Martin Gardner , who featured Hex in his July 1957 Mathematical Games column , Nash's fellow players called the game either Nash or John, with the latter name referring to the fact that the game could be played on hexagonal bathroom tiles. [ 2 ] Nash insisted that he discovered the game independently of Hein, but there is some doubt about this, as it is known that there were Danish people, including Aage Bohr , who played Hex at Princeton in the 1940s, so that Nash may have subconsciously picked up the idea. Hein wrote to Gardner in 1957 expressing doubt that Nash discovered Hex independently. Gardner was unable to independently verify or refute Nash's claim. [ 6 ] Gardner privately wrote to Hein: "I discussed it with the editor, and we decided that the charitable thing to do was to give Nash the benefit of the doubt. ... The fact that you invented the game before anyone else is undisputed. Any number of people can come along later and say that they thought of the same thing at some later date, but this means little and nobody really cares." [ 1 ] : 134 In a later letter to Hein, Gardner also wrote: "Just between you and me, and off the record, I think you hit the nail on the head when you referred to a 'flash of a suggestion' which came to Mr. Nash from a Danish source, and which he later forgot about. It seems the most likely explanation." [ 1 ] : 136
Initially in 1942, Hein distributed the game, which was then called Polygon, in the form of 50-sheet game pads. Each sheet contained an 11×11 empty board that could be played on with pencils or pens. [ 1 ] In 1952, Parker Brothers marketed a version of the game under the name "Hex", and the name stuck. [ 2 ] Parker Brothers also sold a version under the "Con-tac-tix" name in 1968. [ 3 ] Hex was also issued as one of the games in the 1974 3M Paper Games Series; the game contained a 5 + 1 ⁄ 2 -by- 8 + 1 ⁄ 2 -inch (140 mm × 220 mm) 50-sheet pad of ruled Hex grids. Hex is currently published by Nestorgames in 11×11, 14×14, and 19×19 sizes. [ 7 ]
About 1950, Claude Shannon and E. F. Moore constructed an analog Hex playing machine, which was essentially a resistance network with resistors for edges and light bulbs for vertices. [ 8 ] The move to be made corresponded to a certain specified saddle point in the network. The machine played a reasonably good game of Hex. Later, researchers attempting to solve the game and develop Hex-playing computer algorithms emulated Shannon's network to create strong computer players. [ 9 ]
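The construction details of Shannon's machine are not given here. As a rough illustration of the underlying idea, the sketch below (my own Python toy with illustrative names, not code from any cited Hex program) computes the effective resistance between two nodes of a small unit-resistor network from the graph Laplacian; this is the kind of quantity that later "circuit" heuristics use to score how strongly a player's two edges are connected, with lower resistance meaning a stronger connection.

```python
# A minimal sketch of the effective-resistance idea behind resistance-network
# Hex heuristics.  The graph below is a toy example, not an actual Hex position.
import numpy as np

def effective_resistance(n_nodes, edges, s, t):
    """Effective resistance between nodes s and t, with every resistor = 1 ohm."""
    L = np.zeros((n_nodes, n_nodes))
    for u, v in edges:                       # build the graph Laplacian
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    L_pinv = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse of the Laplacian
    e = np.zeros(n_nodes)
    e[s], e[t] = 1.0, -1.0
    return float(e @ L_pinv @ e)             # (e_s - e_t)^T L^+ (e_s - e_t)

# Two parallel 2-resistor paths between node 0 and node 3 -> expected 1.0 ohm.
print(effective_resistance(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))
```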
It was known to Hein in 1942 that Hex cannot end in a draw; in fact, one of his design criteria for the game was that "exactly one of the two players can connect their two sides". [ 1 ] : 29
It was also known to Hein that the first player has a theoretical winning strategy. [ 1 ] : 42
In 1952, John Nash wrote up an existence proof that on symmetrical boards, the first player has a winning strategy. [ 1 ] : 97
In 1964, the mathematician Alfred Lehman showed that Hex cannot be represented as a binary matroid , so a determinate winning strategy like that for the Shannon switching game on a regular rectangular grid was unavailable. [ 10 ]
In 1981, Stefan Reisch showed that Hex is PSPACE-complete. [ 11 ]
In 2002, the first explicit winning strategy (a reduction-type strategy) on a 7×7 board was described.
In the 2000s, computer algorithms using brute-force search were used to completely solve Hex boards up to size 9×9 (as of 2016).
Starting about 2006, the field of computer Hex came to be dominated by Monte Carlo tree search methods borrowed from successful computer implementations of Go. These replaced earlier implementations that combined Shannon's Hex-playing heuristic with alpha-beta search . Notable early implementations include Dolphin Microware's Hexmaster , published in the early 1980s for Atari 8-bit computers. [ 12 ]
Until 2019, humans remained better than computers, at least on big boards such as 19×19, but on October 30, 2019, the program Mootwo won against the human player with the best Elo rating on LittleGolem, who was also the winner of various tournaments. This program was based on Polygames [ 13 ] (an open-source project, initially developed by Facebook Artificial Intelligence Research and several universities [ 14 ] ) using a mix of techniques. [ 15 ]
From the proof of a winning strategy for the first player, it is known that the Hex board must have a complex type of connectivity which has never been solved. Play consists of creating small patterns which have a simpler type of connectivity called "safely connected", and joining them into sequences that form a "path". Eventually, one of the players will succeed in forming a safely connected path of stones and spaces between their sides of the board and win. The final stage of the game, if necessary, consists of filling in the empty spaces in the path. [ 17 ]
A "safely connected" pattern is composed of stones of the player's color and open spaces which can be joined into a chain, an unbroken sequence of edge-wise adjacent stones, no matter how the opponent plays. [ 18 ] One of the simplest such patterns is the bridge, which consists of a diamond of two stones of the same color and two empty spaces, where the two stones do not touch. [ 19 ] If the opponent plays in either space, the player plays in the other, creating a contiguous chain. There are also safely connected patterns which connect stones to edges. [ 20 ] There are many more safely connected patterns, some quite complex, built up of simpler ones like those shown. Patterns and paths can be disrupted by the opponent before they are complete, so the configuration of the board during an actual game often looks like a patchwork rather than something planned or designed. [ 17 ]
There are weaker types of connectivity than "safely connected" which exist between stones or between safely connected patterns which have multiple spaces between them. [ 21 ] The middle part of the game consists of creating a network of such weakly connected stones and patterns [ 21 ] which hopefully will allow the player, by filling in the weak links, to construct just one safely connected path between sides as the game progresses. [ 21 ]
Success at Hex requires a particular ability to visualize the synthesis of complex patterns in a heuristic way, and to estimate whether such patterns are 'strongly enough' connected to enable an eventual win. [ 17 ] The skill is somewhat similar to the visualization of patterns, sequencing of moves, and evaluation of positions in chess. [ 22 ]
It is not difficult to convince oneself that Hex cannot end in a draw, a fact referred to as the "Hex theorem": no matter how the board is filled with stones, there will always be one and only one player who has connected their edges. This fact was known to Piet Hein in 1942, who mentioned it as one of his design criteria for Hex in the original Politiken article. [ 1 ] : 29 Hein also stated this fact as "a barrier for your opponent is a connection for you". [ 1 ] : 35 John Nash wrote up a proof of this fact around 1949, [ 23 ] but apparently did not publish the proof. Its first exposition appears in an in-house technical report in 1952, [ 24 ] in which Nash states that "connection and blocking the opponent are equivalent acts". A more rigorous proof was published by John R. Pierce in his 1961 book Symbols, Signals, and Noise . [ 25 ] In 1979, David Gale published a proof that the determinacy of Hex is equivalent to the two-dimensional Brouwer fixed-point theorem , and that the determinacy of higher-dimensional n -player variants proves the fixed-point theorem in general. [ 26 ]
An informal proof of the no-draw property of Hex can be sketched as follows: consider the connected component of one of the red edges. This component either includes the opposite red edge, in which case Red has a connection, or else it does not, in which case the blue stones along the boundary of the connected component form a winning path for Blue. The concept of a connected component is well-defined because in a hexagonal grid, two cells can only meet in an edge or not at all; it is not possible for cells to overlap in a single point.
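A small script can make the connected-component argument concrete. The board representation below, an n×n rhombus of axial coordinates with Red's edges taken as the first and last rows, is an illustrative convention of mine rather than anything specified in the article; it simply searches the red component from one red edge and reports whether it reaches the other.

```python
# An illustrative winner check for a completely filled Hex board.
from collections import deque

OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def red_wins(board, n):
    """board[(q, r)] is 'R' or 'B' for every cell of the filled n x n rhombus.
    Red's edges are taken to be the rows r = 0 and r = n - 1."""
    frontier = deque((q, 0) for q in range(n) if board[(q, 0)] == 'R')
    seen = set(frontier)
    while frontier:
        q, r = frontier.popleft()
        if r == n - 1:
            return True                       # red component touches the far red edge
        for dq, dr in OFFSETS:
            nxt = (q + dq, r + dr)
            if nxt in board and nxt not in seen and board[nxt] == 'R':
                seen.add(nxt)
                frontier.append(nxt)
    return False                              # by the Hex theorem, blue then has a winning chain

# 2 x 2 example: red holds a full column and so connects its two edges.
tiny = {(0, 0): 'R', (0, 1): 'R', (1, 0): 'B', (1, 1): 'B'}
print(red_wins(tiny, 2))                      # True
```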
In Hex without the swap rule on any board of size n × n , the first player has a theoretical winning strategy. This fact was mentioned by Hein in his notes for a lecture he gave in 1943: "in contrast to most other games, it can be proved that the first player in theory always can win, that is, if she could see to the end of all possible lines of play". [ 1 ] : 42
All known proofs of this fact are non-constructive, i.e., the proof gives no indication of what the actual winning strategy is. Here is a condensed version of a proof that is attributed to John Nash c. 1949. [ 2 ] Suppose, for contradiction, that the second player has a winning strategy. The first player makes an arbitrary first move and thereafter follows the second player's supposed winning strategy as if they were the second player; whenever that strategy calls for playing on a cell already occupied by an earlier arbitrary stone, the first player simply makes another arbitrary move. An extra stone on the board can never harm its owner in Hex, so the first player also wins, contradicting the assumption; since Hex cannot end in a draw, the first player must have a winning strategy. The proof works for a number of games including Hex, and has come to be called the strategy-stealing argument .
In 1976, Shimon Even and Robert Tarjan proved that determining whether a position in a game of generalized Hex played on arbitrary graphs is a winning position is PSPACE-complete . [ 27 ] A strengthening of this result was proved by Reisch by reducing the quantified Boolean formula problem in conjunctive normal form to Hex. [ 28 ] This result means that there is no efficient (polynomial time in board size) algorithm to solve an arbitrary Hex position unless there is an efficient algorithm for all PSPACE problems, which is widely believed not to be the case. [ 29 ] However, it doesn't rule out the possibility of a simple winning strategy for the initial position (on boards of arbitrary size), or a simple winning strategy for all positions on a board of a particular size.
In 11×11 Hex, the state space complexity is approximately 2.4×10 56 ; [ 30 ] versus 4.6×10 46 for chess. [ 31 ] The game tree complexity is approximately 10 98 [ 32 ] versus 10 123 for chess. [ 33 ]
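For scale, a naive upper bound on the number of 11×11 positions treats every cell independently as empty, red, or blue; the quoted 2.4×10^56 is somewhat smaller, presumably because it counts only positions with stone counts reachable in legal play (my reading of the figure, not a statement from the cited source).

```python
# A quick sanity check on the scale of the quoted state-space figure.
cells = 11 * 11
naive_upper_bound = 3 ** cells                  # empty / red / blue for each cell
print(f"3^{cells} ≈ {naive_upper_bound:.2e}")   # ≈ 5.4e57, roughly 20x the quoted 2.4e56
```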
In 2002, Jing Yang, Simon Liao and Mirek Pawlak found an explicit winning strategy for the first player on Hex boards of size 7×7 using a decomposition method with a set of reusable local patterns. [ 34 ] They extended the method to weakly solve the center pair of topologically congruent openings on 8×8 boards in 2002 and the center opening on 9×9 boards in 2003. [ 35 ] In 2009, Philip Henderson, Broderick Arneson and Ryan B. Hayward completed the analysis of the 8×8 board with a computer search, solving all the possible openings. [ 36 ] In 2013, Jakub Pawlewicz and Ryan B. Hayward solved all openings for 9×9 boards, and one (the most-central) opening move on the 10×10 board. [ 37 ] Gardner postulated in his 1957 Scientific American column, albeit speciously, that any first play on the short diagonal is a winning play; [ 38 ] for all solved game boards up to n=9, that has indeed been the case. In addition, for all boards except n=2 and n=4, there have been numerous additional winning first moves; the number of winning first moves is generally ≥ n²/2.
Other connection games with similar objectives but different structures include Shannon switching game (also known as Gale and Bridg-It) and TwixT . Both of these bear some degree of similarity to the ancient Chinese game of Go .
The game may be played on a rectangular grid like a chess, checker or go board, by considering that spaces (intersections in the case of go) are connected in one diagonal direction but not the other. The game may be played with paper and pencil on a rectangular array of dots or graph paper in the same way by using two different colored pencils.
Popular dimensions other than the standard 11×11 are 13×13 and 19×19 as a result of the game's relationship to the older game of Go . According to the book A Beautiful Mind , John Nash (one of the game's inventors) advocated 14×14 as the optimal size.
The misère variant of Hex is called "Rex", in which each player tries to force their opponent to make a chain. Rex is slower than Hex since, on any empty board with equal dimensions, the losing player can delay a loss until the entire board is full. [ 39 ] On boards with unequal dimensions, the player whose sides are further apart can win regardless of who plays first. [ 40 ] On boards with equal dimensions, the first player can win on a board with an even number of cells per side, and the second player can win on a board with an odd number. [ 41 ] [ 42 ] On boards with an even number, one of the first player's winning moves is always to place a stone in the acute corner. [ 39 ]
Hex had an incarnation as the question board from the television game show Blockbusters . In order to play a "move", contestants had to answer a question correctly. The board had 5 alternating columns of 4 hexagons; the solo player could connect top-to-bottom in 4 moves, while the team of two could connect left-to-right in 5 moves.
The game of Y is Hex played on a triangular grid of hexagons; the object is for either player to connect all three sides of the triangle. Y is a generalization of Hex to the extent that any position on a Hex board can be represented as an equivalent position on a larger Y board.
Havannah is a game based on Hex. [ 43 ] It too has a board composed of hexagonal tiles; however, the board itself is in the shape of a large hexagon, and a win is achieved by forming one of three patterns.
Projex is a variation of Hex played on a real projective plane , where the players have the goal of creating a noncontractible loop. [ 44 ] Like in Hex, there are no ties, and there is no position in which both players have a winning connection.
Dark Hex (also known as Phantom Hex) is an imperfect information version of Hex. [ 45 ] The players are not shown each other's stones at any point in the game unless they discover them first. The game is played in the presence of an umpire, with whom each player first checks whether an attempted move collides with a hidden opposing stone. Different versions of the game are distinguished by how play continues after such a collision.
As of 2016, there were over-the-board tournaments reported from Brazil, Czech Republic, Denmark, France, Germany, Italy, Netherlands, Norway, Poland, Portugal, Spain, UK and the US. [ citation needed ] One of the largest Hex competitions is organized by the International Committee of Mathematical Games in Paris, France, and has been held annually since 2013. [ citation needed ] Hex is also part of the Computer Olympiad . [ 46 ] During this competition the pie rule is used. | https://en.wikipedia.org/wiki/Hex_(board_game)
Hexaamminecobalt(III) chloride is the chemical compound with the formula [Co(NH 3 ) 6 ]Cl 3 . It is the chloride salt of the coordination complex [Co(NH 3 ) 6 ] 3+ , which is considered an archetypal "Werner complex", named after the pioneer of coordination chemistry, Alfred Werner . The cation itself is a metal ammine complex with six ammonia ligands attached to the cobalt (III) ion.
[Co(NH 3 ) 6 ] 3+ is diamagnetic, with a low-spin 3d 6 octahedral Co(III) center. The cation obeys the 18-electron rule and is considered to be a classic example of an exchange inert metal complex. As a manifestation of its inertness, [Co(NH 3 ) 6 ]Cl 3 can be recrystallized unchanged from concentrated hydrochloric acid : the NH 3 is so tightly bound to the Co(III) centers that it does not dissociate to allow its protonation. [ 1 ] In contrast, labile metal ammine complexes, such as [Ni(NH 3 ) 6 ]Cl 2 , react rapidly with acids, reflecting the lability of the Ni(II)–NH 3 bonds. Upon heating, hexamminecobalt(III) begins to lose some of its ammine ligands, eventually producing a stronger oxidant.
The chloride ions in [Co(NH 3 ) 6 ]Cl 3 can be exchanged with a variety of other anions such as nitrate , bromide , iodide , and sulfamate to afford the corresponding [Co(NH 3 ) 6 ]X 3 derivative. Such salts are orange or bright yellow and display varying degrees of water solubility. The chloride ion can also be exchanged with more complex anions such as the hexathiocyanatochromate(III), yielding a pink compound with formula [Co(NH 3 ) 6 ] [Cr(SCN) 6 ], or the ferricyanide ion. [ citation needed ]
[Co(NH 3 ) 6 ]Cl 3 is prepared by treating cobalt(II) chloride with ammonia and ammonium chloride followed by oxidation. Oxidants include hydrogen peroxide or oxygen in the presence of charcoal catalyst. [ 1 ] This salt appears to have been first reported by Fremy. [ 2 ]
The acetate salt can be prepared by aerobic oxidation of cobalt(II) acetate , ammonium acetate , and ammonia in methanol. [ 3 ] The acetate salt is highly water-soluble to the level of 1.9 M (20 °C), versus 0.26 M for the trichloride.
[Co(NH 3 ) 6 ] 3+ is a component of some structural biology methods (especially for DNA or RNA , where positive ions stabilize tertiary structure of the phosphate backbone), to help solve their structures by X-ray crystallography [ 4 ] or by nuclear magnetic resonance . [ 5 ] In the biological system, the counterions would more probably be Mg 2+ , but the heavy atoms of cobalt (or sometimes iridium , as in PDB : 2GIS ) provide anomalous scattering to solve the phase problem and produce an electron-density map of the structure. [ 6 ]
[Co(NH 3 ) 6 ] 3+ is used to investigate DNA . The cation induces the transition of DNA structure from the classical B-form to the Z-form. [ 7 ] | https://en.wikipedia.org/wiki/Hexaamminecobalt(III)_chloride |
Hexaamminenickel chloride is the chemical compound with the formula [Ni(NH 3 ) 6 ]Cl 2 . It is the chloride salt of the metal ammine complex [Ni(NH 3 ) 6 ] 2+ . The cation features six ammonia (called ammines in coordination chemistry) ligands attached to the nickel (II) ion. [ 1 ]
[Ni(NH 3 ) 6 ] 2+ , like all octahedral nickel(II) complexes, is paramagnetic with two unpaired electrons localized on each Ni center. [Ni(NH 3 ) 6 ]Cl 2 is prepared by treating aqueous nickel(II) chloride with ammonia . It is useful as a molecular source of anhydrous nickel(II). [ 2 ]
One commercial method for extraction of nickel from its sulfide ores involves the sulfate salt of [Ni(NH 3 ) 6 ] 2+ . In this process, the partially purified ore is treated with air and ammonia as described with this simplified equation: [ 3 ] | https://en.wikipedia.org/wiki/Hexaamminenickel_chloride |
Hexaammineplatinum(IV) chloride is the chemical compound with the formula [Pt(NH 3 ) 6 ]Cl 4 . It is the chloride salt of the metal ammine complex [Pt(NH 3 ) 6 ] 4+ . The cation features six ammonia (called ammines in coordination chemistry) ligands attached to the platinum (IV) ion. It is a white, water soluble solid.
Typical for platinum(IV) complexes, [Pt(NH 3 ) 6 ] 4+ is diamagnetic and kinetically inert, e.g. unaffected by strong acids. The cation obeys the 18-electron rule . It is prepared by treatment of methylamine complex [Pt(NH 2 CH 3 ) 4 Cl 2 ]Cl 2 with ammonia . [ 1 ]
The complex [Pt(NH 3 ) 6 ] 4+ is a rare example of a tetracationic ammine complex. Its conjugate bases [Pt(NH 3 ) 5 NH 2 ] 3+ and [Pt(NH 3 ) 4 (NH 2 ) 2 ] 2+ have been characterized. [ 2 ] | https://en.wikipedia.org/wiki/Hexaammineplatinum(IV)_chloride |
Hexaarylbiimidazoles ( HABIs ) are a class of organic compounds that are imidazole derivatives. [ 1 ] In their natural state, HABIs are typically colorless, but when ultraviolet light breaks the bond connecting the two imidazole groups in the molecule, it produces a version that is dark blue. The transformation takes ten seconds or longer. [ 2 ] By adding naphthalene to the compound, the color transition can be made in about 180 milliseconds. The cyclophane version of HABI reverts to colorless just as quickly once the UV light is turned off.
| https://en.wikipedia.org/wiki/Hexaarylbiimidazole
Hexabromocyclododecane ( HBCD or HBCDD ) is a brominated flame retardant . Its molecule consists of a twelve-carbon ring bearing eighteen hydrogen and six bromine atoms. Its primary application is in extruded (XPS) and expanded (EPS) polystyrene foam used as thermal insulation in construction. Other uses are upholstered furniture, automobile interior textiles, car cushions and insulation blocks in trucks, packaging material, video cassette recorder housing, and electric and electronic equipment. According to UNEP, "HBCD is produced in China, Europe, Japan, and the USA. The last known current annual production is approximately 28,000 tonnes per year. The main share of the market volume is used in Europe and China" (figures from 2009 to 2010). [ 2 ] Due to its persistence , toxicity, and ecotoxicity , the Stockholm Convention on Persistent Organic Pollutants decided in May 2013 to list hexabromocyclododecane in Annex A to the convention with specific exemptions for production and use in expanded polystyrene and extruded polystyrene in buildings. Because HBCD has 16 possible stereo-isomers with different biological activities, the substance poses a difficult problem for manufacture and regulation. It is synthesized via the bromination of cyclododecatriene . [ 3 ] [ 4 ]
HBCD's toxicity and its harm to the environment are current sources of concern. HBCD can be found in environmental samples such as birds, mammals, fish, and other aquatic organisms as well as soil and sediment. [ 5 ] On this basis, on 28 October 2008, the European Chemicals Agency decided to include HBCD in the SVHC list, [ 6 ] Substances of Very High Concern, within the Registration, Evaluation, Authorisation and Restriction of Chemicals framework. On 18 February 2011, HBCD was listed in Annex XIV of REACH and hence is subject to Authorisation. HBCD can be used until the so-called “sunset date” (21 August 2015). After that date, only authorized applications will be allowed in the EU.
HBCD has been found widely in biological samples from remote areas, supporting evidence for its classification as Persistent, Bioaccumulative, and Toxic (PBT) and for its long-range environmental transport. [ 7 ] In July 2012, an EU-harmonized classification and labeling for HBCD entered into force. HBCD has been classified as category 2 for reproductive toxicity. [ 8 ] Since August 2010, hexabromocyclododecanes have been included in the EPA 's List of Chemicals of Concern. [ 9 ] Japan was the first country to implement a ban on the import and production of HBCD, effective in May 2014. The United States EPA began the process of regulating HBCD in 2020, releasing its final evaluation of the chemical and confirming its health and environmental risks in 2022. [ 10 ]
Because HBCD has 16 possible stereo-isomers with different biological activities, the substance poses a difficult problem for manufacture and regulation. [ 11 ] The HBCD commercial mixture is composed of three main diastereomers denoted as alpha (α-HBCD), beta (β-HBCD), and gamma (γ-HBCD) with traces of others. A series of four published in vivo mice studies were conducted between several federal and academic institutions to characterize the toxicokinetic profiles of individual HBCD stereoisomers. The predominant diastereomer in the HBCD mixture, γ-HBCD, undergoes rapid hepatic metabolism, fecal and urinary elimination, and biological conversion to other diastereomers with a short biological half-life of 1–4 days. After oral exposure to the γ-HBCD diastereomer, β-HBCD was detected in the liver and brain, and α-HBCD and β-HBCD were detected in the fat and feces, [ 12 ] with multiple novel metabolites identified: monohydroxy-pentabromocyclododecane, monohydroxy-pentabromocyclododecene, dihydroxy-pentabromocyclododecene, and dihydroxy-pentabromocyclododecadiene. [ 13 ] In contrast, α-HBCD is more biologically persistent, resistant to metabolism, bioaccumulates in lipid-rich tissues after a 10-day repeated exposure study, and has a longer biological half-life of up to 21 days; only α-HBCD was detected in the liver, brain, fat and feces, with no stereoisomerization to γ-HBCD or β-HBCD, and low trace levels of four different hydroxylated metabolites were identified. [ 14 ] Developing mice had higher HBCD tissue levels than adult mice after exposure to either α-HBCD or γ-HBCD, indicating the potential for increased susceptibility of the developing young to HBCD effects. [ 15 ] The reported toxicokinetic differences of individual HBCD diastereoisomers have important implications for the extrapolation of toxicological studies of the commercial HBCD mixture to the assessment of human risk .
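To put the quoted half-lives in perspective, the sketch below assumes simple first-order (exponential) elimination, which is a simplification of the real toxicokinetics, and compares how much of each diastereomer would remain after three weeks; the 4-day value is the upper end of the 1–4 day range quoted above.

```python
# An order-of-magnitude illustration of the persistence difference between
# gamma-HBCD (t1/2 taken as 4 days) and alpha-HBCD (t1/2 taken as 21 days).
def fraction_remaining(days, half_life_days):
    return 0.5 ** (days / half_life_days)

for name, t_half in [("gamma-HBCD", 4.0), ("alpha-HBCD", 21.0)]:
    print(f"{name}: {fraction_remaining(21, t_half):.1%} remaining after 21 days")
# gamma-HBCD: ~2.6% remaining; alpha-HBCD: 50.0% remaining
```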
As of 2012, there was a large and still increasing stock of HBCD in the anthroposphere , mainly in EPS and XPS insulation boards. [ 16 ] A long-term environmental monitoring program run by the Fraunhofer Institute for Molecular Biology and Applied Ecology demonstrates a general trend that HBCD concentrations are decreasing over time. [ 17 ] HBCD emissions into the environment are limited under the voluntary industry emission management program: the Voluntary Emissions Control Action Programme (VECAP). [ 18 ] The VECAP annual report demonstrated a continuous decrease of potential emissions of HBCD to the environment in Europe. [ 19 ]
Due to its persistence , bioaccumulation, toxicity/ ecotoxicity and long-range environmental transport, the Stockholm Convention on Persistent Organic Pollutants decided in May 2013 to list hexabromocyclododecane in Annex A to the convention with specific exemptions for production and use in expanded polystyrene (EPS) and extruded polystyrene (XPS) in buildings. The listing entered in force on 26 November 2014 for most countries. [ 20 ] [ 21 ] Countries could choose to use this exemption for up to five years after the entry into force. This possibility was used by a number of countries. [ 22 ] | https://en.wikipedia.org/wiki/Hexabromocyclododecane |
Hexachlorobenzene , or perchlorobenzene , is an aryl chloride and a six-substituted chlorobenzene with the molecular formula C 6 Cl 6 . It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt . Its use has been banned globally under the Stockholm Convention on Persistent Organic Pollutants . [ 6 ]
Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. [ 7 ] It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water, with which it does not react. It has a flash point of 468 °F (242 °C) and is stable under normal temperatures and pressures. It is combustible but does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid , other chlorinated compounds (such as phosgene ), carbon monoxide , and carbon dioxide . [ 8 ]
Hexachlorobenzene was first known as "Julin's chloride of carbon", as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. [ 9 ] In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride ; he then suggested that his compound was the same as Julin's chloride of carbon. [ 10 ] Müller, who had previously also believed it was the same compound as Michael Faraday 's "perchloride of carbon" ( hexachloroethane ), obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. [ 9 ] In 1867, Henry Bassett proved that the compound produced from benzene and antimony pentachloride was the same as Julin's carbon chloride and named it "hexachlorobenzene". [ 10 ] [ 9 ]
Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine was from crude saltpetre . [ 9 ]
Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours through a red-hot tube. [ 9 ]
Large-scale manufacture for use as a fungicide was developed by using the residue remaining after purification of the mixture of isomers of hexachlorocyclohexane, from which the insecticide lindane (the γ- isomer ) had been removed, leaving the unwanted α- and β- isomers. This mixture is produced when benzene is reacted with chlorine in the presence of ultraviolet light (e.g. from sunlight). [ 11 ] [ 12 ] However, manufacture is no longer practiced following the compound's ban.
Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorobenzenes. [ 13 ] A typical catalyst is ferric chloride . Much milder reagents than chlorine (e.g. dichlorine monoxide , iodine in chlorosulfonic acid ) also suffice, and the various hexachlorocyclohexanes can substitute for benzene as well. [ 14 ]
Hexachlorobenzene was used in agriculture to control the fungus Tilletia caries (common bunt of wheat). It is also effective against Tilletia controversa , which causes dwarf bunt. The compound was introduced in 1947, normally formulated as a seed dressing, but is now banned in many countries. [ 15 ]
A minor industrial phloroglucinol synthesis nucleophilically substitutes hexachlorobenzene with alkoxides , followed by acidic workup. [ 16 ]
HCB production peaked at around 100,000 tons per year in the late 1970s. Since then, usage has been declining steadily, with production of less than 90 tons per year by the mid 1990s. [ 17 ] The half-life in soil is estimated to be 9 years. [ 17 ] The mechanism of its toxicity and other adverse effects remains under study. [ 18 ]
Hexachlorobenzene can react violently with dimethylformamide , particularly in the presence of catalytic transition-metal salts. [ 19 ]
The material has relatively low acute toxicity but is toxic because of its persistence and accumulation in lipid-rich body tissues. [ citation needed ]
Hexachlorobenzene is an animal carcinogen and is considered to be a probable human carcinogen. [ 20 ] After its introduction as a fungicide in 1945, for crop seeds, this toxic chemical was found in all food types. [ citation needed ] Hexachlorobenzene was banned from use in the United States in 1966. [ citation needed ]
This material has been classified by the International Agency for Research on Cancer (IARC) as a Group 2B carcinogen (possibly carcinogenic to humans). Animal carcinogenicity data for hexachlorobenzene show increased incidences of liver , kidney (renal tubular tumours) and thyroid cancers . [ 21 ] Chronic oral exposure in humans has been shown to give rise to a liver disease ( porphyria cutanea tarda ), skin lesions with discoloration, ulceration , photosensitivity , thyroid effects, bone effects and loss of hair. Neurological changes have been reported in rodents exposed to hexachlorobenzene. Hexachlorobenzene may cause embryolethality and teratogenic effects. Human and animal studies have demonstrated that hexachlorobenzene crosses the placenta to accumulate in foetal tissues and is transferred in breast milk . [ citation needed ]
HCB is very toxic to aquatic organisms . It may cause long term adverse effects in the aquatic environment . Therefore, release into waterways should be avoided. It is persistent in the environment. Ecological investigations have found that biomagnification up the food chain does occur. Hexachlorobenzene has a half life in the soil of between 3 and 6 years. Risk of bioaccumulation in an aquatic species is high. [ citation needed ]
In Anatolia , Turkey between 1955 and 1959, during a period when bread wheat was unavailable, 500 people were fatally poisoned and more than 4,000 people fell ill by eating bread made with HCB-treated seed that was intended for agricultural use. Most of the sick were affected with a liver condition called porphyria cutanea tarda , which disturbs the metabolism of hemoglobin and results in skin lesions. Almost all breastfeeding children under the age of two, whose mothers had eaten tainted bread, died from a condition called "pembe yara" or "pink sore", most likely from high doses of HCB in the breast milk. [ 22 ] In one mother's breast milk the HCB level was found to be 20 parts per million in lipid, approximately 2,000 times the average levels of contamination found in breast-milk samples around the world. [ 23 ] [ 24 ] Follow-up studies 20 to 30 years after the poisoning found average HCB levels in breast milk were still more than seven times the average for unexposed women in that part of the world (56 specimens of human milk obtained from mothers with porphyria, average value was 0.51 ppm in HCB-exposed patients compared to 0.07 ppm in unexposed controls), [ 25 ] [ 26 ] and 150 times the level allowed in cow's milk. [ 27 ]
In the same follow-up study of 252 patients (162 males and 90 females, avg. current age of 35.7 years), 20–30 years' postexposure, many subjects had dermatologic, neurologic, and orthopedic symptoms and signs. The observed clinical findings include scarring of the face and hands (83.7%), hyperpigmentation (65%), hypertrichosis (44.8%), pinched faces (40.1%), painless arthritis (70.2%), small hands (66.6%), sensory shading (60.6%), myotonia (37.9%), cogwheeling (41.9%), enlarged thyroid (34.9%), and enlarged liver (4.8%). Urine and stool porphyrin levels were determined in all patients, and 17 have at least one of the porphyrins elevated. Offspring of mothers with three decades of HCB-induced porphyria appear normal. [ 25 ]
| https://en.wikipedia.org/wiki/Hexachlorobenzene
Hexachlorobutadiene , (often abbreviated as "HCBD") Cl 2 C=C(Cl)C(Cl)=CCl 2 , is a colorless liquid at room temperature that has an odor similar to that of turpentine . It is a chlorinated aliphatic diene with niche applications but is most commonly used as a solvent for other chlorine-containing compounds. [ 2 ] [ 3 ] Structurally, it has a 1,3-butadiene core, but fully substituted with chlorine atoms.
Hexachlorobutadiene is primarily produced in chlorinolysis plants as a by-product in the production of carbon tetrachloride and tetrachloroethene . Chlorinolysis is a radical chain reaction that occurs when hydrocarbons are exposed to chlorine gas under pyrolytic conditions. The hydrocarbon is chlorinated and the resulting chlorocarbons are broken down. This process is analogous to combustion, but with chlorine instead of oxygen. [ 2 ] [ 4 ]
Hexachlorobutadiene occurs as a by-product during the chlorinolysis of butane derivatives in the production of both carbon tetrachloride and tetrachloroethene. These two commodities are manufactured on such a large scale, that enough HCBD can generally be obtained to meet the industrial demand. Alternatively, hexachlorobutadiene can be directly synthesized via the chlorination of butane or butadiene . [ 2 ] [ 3 ]
The products of chlorinolysis reactions heavily depend upon both the temperature and pressure under which the reaction occurs. Thus, by adjusting these reaction conditions in the presence of chlorine gas, hexachlorobutadiene can be even further chlorinated to give tetrachloroethylene , hexachloroethane , octachlorobutene, and even decachlorobutane. In general, increasing the number of chlorine substituents on a compound increases its toxicity but decreases its combustibility. Chlorination via carbon skeleton cleavage is thermodynamically preferred, whereas chlorinated C 4 products are favored at lower temperatures and pressures. The three chlorinolysis products of hexachlorobutadiene are shown in the reactions below. [ 3 ]
One of the primary applications of hexachlorobutadiene is as a solvent for chlorine, a good example of the common aphorism "like dissolves like." The molar solubility of chlorine in HCBD at 0 °C is around 34% (2.17 mol/L). The solubility of another chlorine solvent, carbon tetrachloride, at 0 °C is about 30% (3.11 mol/L). One mole of C 4 Cl 6 can dissolve more chlorine than one mole of CCl 4 , but the molecular weight difference between the two solvents is such that per liter of solvent, more chlorine can be dissolved in carbon tetrachloride. Shown below is the molar solubility of hexachlorobutadiene compared to carbon tetrachloride at various temperatures. [ 2 ] [ 4 ]
[Table: molar solubility of chlorine in HCBD and in CCl 4 at various temperatures]
Just like chlorine, many other chlorine-containing compounds can be readily dissolved in a solution of hexachlorobutadiene. As a solvent, it is unreactive toward common acids and select non-nucleophilic bases. An illustrative application of HCBD as a solvent is the FeCl 3 -catalyzed chlorination of toluene to give pentachloromethylbenzene. Hexachlorobutadiene is used exclusively over carbon tetrachloride in this reaction because ferric chloride (FeCl 3 ) is insoluble in CCl 4 . [ 5 ] [ 6 ]
Given its affinity for chlorinated compounds, liquid HCBD is used as a scrubber in order to remove chlorine containing contaminants from gas streams. An example of this application is its use in the production of HCl gas as the primary contaminants, especially Cl 2 , are more soluble in hexachlorobutadiene than the gaseous hydrogen chloride. [ 2 ]
In IR spectroscopy, hexachlorobutadiene is occasionally used as a mull in order to analyze the stretching frequencies of C-H stretching bands. The usual mulling agent, Nujol , is a hydrocarbon and thus exhibits C-H stretching bands that can interfere with the signal from the sample. Since HCBD contains no C-H bonds, it can be used instead to obtain this portion of the IR spectrum. Unfortunately, some organometallic compounds react with HCBD, and therefore, care must be taken when selecting it as a mulling agent so as not to destroy the sample. [ 7 ]
Hexachlorobutadiene has yet another, albeit somewhat dated, application as an algicide in industrial cooling systems. Although HCBD is a potent herbicide, in recent years, this particular application has been discouraged due to the high toxicity of the compound at low concentrations. [ 2 ] [ 8 ] [ 9 ]
Hexachlorobutadiene has been observed to produce systemic toxicity following exposure via oral, inhalation, and dermal routes. Effects may include fatty liver degeneration, epithelial necrotizing nephritis, central nervous system depression and cyanosis. [ 10 ]
The United States Environmental Protection Agency [ 11 ] has classified hexachlorobutadiene as a group C Possible Human Carcinogen. The American Conference of Governmental and Industrial Hygienists has classified hexachlorobutadiene as an A3 Confirmed Animal Carcinogen with Unknown Relevance to Humans. [ 12 ] The National Institute for Occupational Safety and Health has set a recommended exposure limit at 0.02 ppm over an eight-hour workday. [ 13 ] | https://en.wikipedia.org/wiki/Hexachlorobutadiene |
Hexachlorophene , also known as Nabac , is an organochlorine compound that was once widely used as a disinfectant . The compound occurs as a white odorless solid, although commercial samples can be off-white and possess a slightly phenolic odor. It is insoluble in water but dissolves in acetone , ethanol , diethyl ether , and chloroform . In medicine , hexachlorophene is useful as a topical anti-infective and anti-bacterial agent. It is also used in agriculture as a soil fungicide , plant bactericide , and acaricide . [ 1 ]
Hexachlorophene is produced by alkylation of 2,4,5- trichlorophenol with formaldehyde . Related antiseptics are prepared similarly, e.g., bromochlorophene and dichlorophene . [ 1 ]
The LD50 (oral, rat) is 59 mg/kg, indicating that the compound is relatively toxic. It is not mutagenic nor teratogenic according to Ullmann's Encyclopedia, [ 1 ] but "embryotoxic and produces some teratogenic effects" according to the International Agency for Research on Cancer. [ 2 ] 2,3,7,8-Tetrachlorodibenzodioxin (TCDD) is always a contaminant in this compound's production. Several accidents releasing many kilograms of TCDD have been reported. The reaction between 2,4,5- trichlorophenol and formaldehyde is exothermic. If the reaction occurs without adequate cooling, TCDD is produced in significant quantities as a byproduct and contaminant. The Seveso disaster and the Times Beach, Missouri , contamination incident exemplify the industrial hazards of hexachlorophene production. [ citation needed ]
In 1972, the "Bébé" brand of baby powder in France killed 39 babies. It also did great damage to the central nervous systems of several hundred other babies. The batch of toxic "Bébé" brand of powder was mistakenly manufactured with 6% hexachlorophene. This industrial accident directly led to the removal of hexachlorophene from consumer products worldwide. [ 3 ] [ 4 ]
In 1972, the U.S. Food and Drug Administration (FDA) halted production and distribution of products containing more than 1% hexachlorophene. [ 5 ] After that change, most products containing hexachlorophene were available only with a doctor's prescription. [ 6 ] The restrictions were enacted after 15 deaths in the United States, and the 39 deaths in France mentioned above, were reported following brain damage caused by hexachlorophene. [ 7 ]
Several companies manufactured over-the-counter preparations which utilised hexachlorophene in their formulations. One product, Baby Magic Bath by The Mennen Company , was recalled in 1971, and removed from retail distribution. [ citation needed ]
Two commercial preparations using hexachlorophene, pHisoDerm and pHisoHex , were widely used as antibacterial skin cleansers in the treatment of acne , (with pHisoDerm developed for those allergic to the active ingredients in pHisoHex ). During the 1960s, both were available over the counter in the US. After the ban, pHisoDerm was reformulated without hexachlorophene, and continued to be sold over-the-counter, while pHisoHex , (which contained 3% hexachlorophene - 3 times the legal limit imposed in 1972), [ 7 ] became available as a prescription body wash. In the European Community countries during the 1970s and 1980s, pHisoHex remained available over the counter. A related product, pHisoAc , was used as a skin mask to dry and peel away acne lesions whilst pHiso-Scrub , a hexachlorophene-impregnated sponge for scrubbing, has since been discontinued. Several substitute products (including triclosan ) were developed, but none had the germ-killing capability of hexachlorophene. ( Sanofi-Aventis became the sole European manufacturer of pHisoHex , while The Mentholatum Company owns the pHisoDerm brand today. Sanofi-Aventis discontinued production of several forms of pHisoHex in August 2009 and discontinued all production of pHisoHex in September 2013). [ 8 ]
The formula for Dial soap was modified to remove hexachlorophene after the FDA ended over-the-counter availability in 1972. [ 6 ]
Bristol-Myers' discontinued Ipana toothpaste brand at one time contained hexachlorophene. Another U.S. brand of toothpaste containing hexachlorophene in the early 1960s was Stripe. [ 9 ]
In Germany, cosmetics containing hexachlorophene have been banned since 1985. [ citation needed ]
In Austria, the sale of drugs containing the substance has been banned since 1990. [ 10 ]
Trade names for hexachlorophene include: Acigena , Almederm , AT7 (dial soap), AT17 , Bilevon , Exofene , Fostril , Gamophen , G-11 , Germa-Medica , Hexosan , K-34 , Septisol , Surofene , M3 . [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Hexachlorophene |
In coding theory , the hexacode is a length 6 linear code of dimension 3 over the Galois field G F ( 4 ) = { 0 , 1 , ω , ω 2 } {\displaystyle GF(4)=\{0,1,\omega ,\omega ^{2}\}} of 4 elements defined by
H = { ( a , b , c , f ( 1 ) , f ( ω ) , f ( ω 2 ) ) : f ( x ) := a x 2 + b x + c ; a , b , c ∈ G F ( 4 ) } {\displaystyle H=\{(a,b,c,f(1),f(\omega ),f(\omega ^{2})):f(x):=ax^{2}+bx+c;\;a,b,c\in GF(4)\}}
It is a 3-dimensional subspace of the vector space of dimension 6 over G F ( 4 ) {\displaystyle GF(4)} .
Then H {\displaystyle H} contains 45 codewords of weight 4, 18 codewords of weight 6, and the zero word. The full automorphism group of the hexacode is 3. A 6 {\displaystyle 3.A_{6}} . [ 1 ] The hexacode can be used to describe the Miracle Octad Generator of R. T. Curtis. | https://en.wikipedia.org/wiki/Hexacode
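The weight distribution quoted above for the hexacode is easy to check by brute force. The short script below assumes the polynomial-evaluation form of the definition given above, enumerates all 4³ = 64 codewords, and tallies their Hamming weights; the GF(4) encoding (0, 1, 2, 3 with 2 standing for ω and 3 for ω²) is an implementation convention of mine.

```python
# Verify the hexacode's weight distribution under the polynomial-evaluation
# construction: codewords (a, b, c, f(1), f(w), f(w^2)) with f(x) = a*x^2 + b*x + c.
from itertools import product

def gf4_add(x, y):
    return x ^ y  # addition in GF(4) is bitwise XOR of the 2-bit representations

def gf4_mul(x, y):
    # multiplication modulo the irreducible polynomial t^2 + t + 1
    if x == 0 or y == 0:
        return 0
    log = {1: 0, 2: 1, 3: 2}          # discrete logs base w
    exp = [1, 2, 3]                   # w^0, w^1, w^2
    return exp[(log[x] + log[y]) % 3]

def f(a, b, c, x):
    return gf4_add(gf4_add(gf4_mul(a, gf4_mul(x, x)), gf4_mul(b, x)), c)

weights = {}
for a, b, c in product(range(4), repeat=3):
    word = (a, b, c, f(a, b, c, 1), f(a, b, c, 2), f(a, b, c, 3))
    w = sum(1 for s in word if s != 0)
    weights[w] = weights.get(w, 0) + 1

print(weights)  # expected: {0: 1, 4: 45, 6: 18}
```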
Hexacyclinol is a natural metabolite of a fungus , Panus rudis . Significant controversy surrounded its proposed structure until its total synthesis by John Porco, Jr. in 2006.
Natural products chemist Udo Gräfe collected a sample of P. rudis HKI 0254 from a dead log in Siberia from which hexacyclinol was isolated. His group's 2002 paper showed that the compound behaved as an antiproliferative drug against cancer cell lines and proposed a structure (2) for the compound. [ 1 ]
An initial total synthesis was published by James J. La Clair in 2006, purporting a synthesis of Gräfe's proposed structure based on 1 H nuclear magnetic resonance ( NMR ) spectra. [ 2 ] Natural products chemist Scott D. Rychnovsky simulated the 13 C nuclear magnetic resonance spectrum of the structure proposed by Gräfe and found that it did not correspond to the spectrum of the structure allegedly synthesized by La Clair. Rychnovsky proposed a different structure (1) based on panepophenanthrin, another molecule isolated from a different strain of P. rudis . [ 3 ] The scientific community then began criticizing La Clair's work, claiming that his work was sloppy or that he fabricated data. [ 4 ] La Clair's publication of his purported synthesis was retracted in 2012, citing a lack of validation of its claims. [ 5 ]
In 2006, a group led by John Porco, Jr. synthesized Rychnovsky's proposed structure. They showed that the 1 H- and 13 C-NMR spectra matched that of the compound isolated by Gräfe, confirming Rychnovsky's structure. [ 6 ] La Clair claimed that since the two structures were isomers , it is possible that they would have similar 1 H-NMR spectra. [ 4 ] However, a later paper by Saielli and Bagno claims that there would be significant differences in the 1 H- and 13 C-NMR spectra of compounds (1) and (2) . [ 7 ]
The controversy was covered extensively by a number of science blogs . [ 8 ]
In response to the controversy, Nobel Prize -winning synthetic chemist E.J. Corey remarked, "Occasionally, blatantly wrong science is published, and to the credit of synthetic chemistry, the corrections usually come quickly and cleanly." [ 4 ] | https://en.wikipedia.org/wiki/Hexacyclinol |
In organic chemistry , the hexadehydro-Diels–Alder ( HDDA ) reaction is an organic chemical reaction between a diyne (2 alkyne functional groups arranged in a conjugated system ) and an alkyne to form a reactive benzyne species, via a [4+2] cycloaddition reaction. [ 1 ] [ 2 ] [ 3 ] This benzyne intermediate then reacts with a suitable trapping agent to form a substituted aromatic product. This reaction is a derivative of the established Diels–Alder reaction and proceeds via a similar [4+2] cycloaddition mechanism. The HDDA reaction is particularly effective for forming heavily functionalized aromatic systems and multiple ring systems in one synthetic step.
Depending on the substrate chosen, the HDDA reaction can be initiated thermally or by the addition of a suitable catalyst , often a transition metal . [ 1 ] [ 2 ] [ 4 ] [ 5 ] The prevailing mechanism for the thermally-initiated HDDA reaction is a [4+2] cycloaddition between a conjugated diyne (1,3-dialkyne) and an alkyne (often referred to as a diynophile in analogy to the Diels–Alder dienophile ) to form an ortho - benzyne species. [ 1 ] [ 2 ] The metal-catalyzed HDDA is thought to proceed through a similar pathway, forming a metal-stabilized benzyne, which is then trapped.
The simplest model of an HDDA reaction is the cycloaddition of butadiyne and acetylene to form ortho-benzyne (o-benzyne, shown below). [ 6 ] This reactive intermediate (denoted by brackets) subsequently reacts with a generalized trapping reagent that consists of a nucleophilic (Nu-) and electrophilic (El-) site, giving the benzenoid product shown.
The o-benzyne intermediate can be visualized in the two resonance forms illustrated above. The most commonly depicted form is the alkyne ( 1 ), but the cumulene ( 1’ ) form can be helpful in visualizing ring formation by [4+2] cycloaddition.
The HDDA reaction is often thermodynamically favorable ( exothermic ), but can have a significant kinetic barrier to reaction (high activation energy ). Calculations have suggested that the formation of unsubstituted o-benzyne (from butadiyne and acetylene, above) has an activation energy of 36 kcal mol −1 , but is thermodynamically favorable, estimated to be exothermic by -51 kcal mol −1 . [ 6 ] As a result of higher activation energy, some HDDA reactions require heating to elevated temperatures (>100 °C) in order to initiate. [ 1 ] [ 2 ]
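The need for elevated temperatures can be rationalized with a back-of-the-envelope Arrhenius estimate. The sketch below assumes a temperature-independent pre-exponential factor, a simplification not made in the cited computational work, and uses the ~36 kcal mol⁻¹ barrier quoted above to compare rate constants at 25 °C and 120 °C.

```python
# Rough Arrhenius estimate of the rate acceleration on heating an HDDA reaction.
import math

R = 1.987e-3          # gas constant in kcal mol^-1 K^-1
Ea = 36.0             # activation energy in kcal/mol (value quoted above)

def relative_rate(T_hot, T_cold):
    # ratio k(T_hot)/k(T_cold) assuming the same pre-exponential factor
    return math.exp(-Ea / R * (1.0 / T_hot - 1.0 / T_cold))

print(f"{relative_rate(393.15, 298.15):.1e}")   # ~2e6, i.e. roughly a millionfold speed-up
```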
Furthermore, the benzyne trapping step is also thermodynamically favourable, calculated to be an additional -73 kcal mol −1 for trapping of an ester-substituted o-benzyne with tert-butanol . [ 1 ]
The HDDA [4+2] cycloaddition can occur via either a concerted pathway or a stepwise, diradical pathway. These two pathways can differ in activation energy depending on substrate and reaction system. Computational studies have suggested that while both pathways are comparable in activation energy for unactivated (unsubstituted) diynophiles, the stepwise pathway has a lower activation energy barrier, and so is the dominant pathway, for activated diynophiles. [ 6 ] [ 7 ]
The regiochemistry of non-symmetrical HDDA-derived benzyne trapping can be explained by a combination of electronic and ring distortion effects. [ 1 ] Computationally, the more obtuse angle ( a ) corresponds to the more electron deficient (δ+) benzyne carbon, leading to attack of the nucleophilic component at this site. Consequently, the electrophilic component adds at the more electron rich (δ-) site ( b ).
The HDDA reaction is a derivative of, and mechanistically related to, the classical Diels–Alder reaction. As described by Hoye and coworkers, the HDDA reaction can be viewed conceptually as a member of a series of pericyclic reactions with increasing unsaturation (by incremental removal of hydrogen pairs). [ 1 ] The “hexadehydro” descriptor is derived from this interpretation, as the simplest HDDA reaction product (o-benzyne, 4 hydrogens) has 6 fewer hydrogen atoms than the simplest Diels–Alder reaction product ( cyclohexene , 10 hydrogens).
Formally, the hexadehydro Diels–Alder reaction describes only the formation of the benzyne, but this species is an unstable intermediate that reacts readily with a variety of trapping partners, including reaction solvents . Thus, in practice the HDDA reaction describes a two-step cascade reaction of benzyne formation and trapping to yield the final product.
The first examples of the HDDA reaction were reported independently in 1997 by the groups of Ueda and Johnson. [ 2 ] [ 8 ] [ 9 ] [ 10 ] Johnson and co-workers observed the cyclization of 1,3,8-nonatriyne under flash vacuum thermolysis (600 °C, 10 −2 torr) to form two products, indane and the dehydrogenation product indene , in 95% combined yield. Deuterium labeling studies suggested that the product was formed by a [4+2] cycloaddition to a benzyne intermediate, followed by in-situ reduction to form the observed products. [ 8 ] Ueda and co-workers observed that acyclic tetraynes cyclized at room temperature to form 5H-fluorenol derivatives. The formation of a benzyne intermediate was determined by trapping studies, using benzene or anthracene to trap the benzyne as a Diels–Alder adduct. [ 10 ] Ueda and co-workers further elaborated on this method in subsequent reports, trapping the benzyne using a variety of nucleophiles (oxygen, nitrogen, and sulfur-based), as well as synthesizing larger, fused-ring aromatic systems. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
While known for over a decade, the HDDA reaction did not come into wider synthetic use until 2012, when Hoye and co-workers conducted a thorough investigation into the scope and utility of this cycloaddition. [ 1 ] That paper referred to this diyne–diynophile reaction as the “hexadehydro Diels–Alder (HDDA) reaction, and this terminology has since come into more widespread use. Since 2012, the HDDA reaction has been an area of renewed interest and has attracted further study by a number of research groups. [ 4 ] [ 5 ] [ 7 ] [ 16 ]
One of the main advantages of the HDDA reaction over other methods of accessing benzynes is the simplicity of the reaction system. HDDA reaction of triynes or tetraynes forms benzynes without the direct formation of by-products. In comparison, the formation of benzyne through removal of ortho-substituents on arenes results in stoichiometric amounts of byproducts from those substituents. For example, formation of benzyne from 1 mole of 2-trimethylsilylphenyl trifluoromethanesulfonate ( triflate ) produces 1 mole of trimethylsilyl fluoride and 1 mole of triflate ion. Byproducts can compete with other reagents for benzyne trapping, cause side-reactions, and may require additional purification.
Additionally, the HDDA reaction can be useful for substrates with sensitive functionality that might not be tolerated by other benzyne formation conditions (e.g. strong base). The thermally-initiated HDDA reaction has been shown to tolerate esters , ketones , protected amides , ethers , protected amines , aryl halides , alkyl halides , alkenes , and cyclopropanes . [ 1 ] [ 4 ] [ 17 ]
The HDDA reaction can fulfill several principles of green chemistry .
The HDDA reaction can be used to synthesize multi-cyclic ring systems from linear precursors containing the diyne, diynophile, and the trapping group. For example, Hoye and co-workers were able to synthesize fused, tricyclic ring systems from linear triyne precursors in one step and high yields via a thermally-initiated, intramolecular HDDA reaction. [ 1 ] Furthermore, both nitrogen- and oxygen-containing heterocycles could be incorporated by use of an appropriate precursor. In this case, the pendant silyl ether provided the trapping group, through a retro- Brook rearrangement .
HDDA-generated benzynes can also be trapped intermolecularly by a variety of trapping reagents. Careful choice of trapping reagent can add further functionality, including aryl halides, aryl heteroatoms ( phenols and aniline derivatives), and multiple ring systems. [ 1 ] [ 18 ]
The HDDA reaction can be used in a cascade reaction sequence with ene reactions , such as the Alder ene reaction and the aromatic ene reaction. [ 16 ] [ 19 ] The HDDA-generated benzyne can be trapped with a suitable ene donor that is covalently tethered to the benzyne. The benzyne serves as the enophile, while the ene can be an alkene (Alder ene) or an aromatic ring (aromatic ene). Lee and co-workers have shown an HDDA-Alder ene cascade reaction that can produce a variety of products, including medium-sized fused rings, spirocycles , and allenes . [ 16 ]
Hoye and co-workers demonstrated a thermally-initiated triple HDDA-aromatic ene-Alder ene cascade that leads to heavily functionalized products in one-step with no additional reagents or by-products. [ 19 ]
HDDA-derived benzynes have also been shown to dehydrogenate saturated alkanes to form alkenes . [ 20 ] In the absence of external trapping reagents, the benzyne intermediate can abstract vicinal hydrogen atoms from a suitable donor, often the reaction solvent (such as tetrahydrofuran or cyclooctane ). This desaturates the donor alkane, forming an alkene , and traps the benzyne as a dihydrobenzenoid product. Isotopic labelling and computational studies suggest that the double hydrogen transfer mechanism occurs by a concerted pathway and that the rate of reaction is highly dependent on the conformation of the alkane donor. [ 20 ] This reaction can be used to access 1,2,3,4-tetrasubstituted aromatic rings, a substitution pattern that can be difficult to access through other synthetic methodology.
The HDDA reaction can also be used as a method of C-H activation , where a pendant alkane C-H bond traps a metal-complexed aryne intermediate. Lee and co-workers observed that transition metal catalysts induced an HDDA reaction of tetraynes that was intramolecularly trapped by a pendant, sp 3 C-H bond. [ 4 ] Primary, secondary, and tertiary C-H bonds were all reactive trapping partners, with silver salts being the most effective catalysts. Deuterium labelling experiments suggest that the (sp 3 ) C-H bond breaking and (sp 2 ) C-H bond forming reactions occur in a concerted fashion.
The silver-catalyzed HDDA reaction has also been used to synthesize organofluorine compounds by use of a fluorine -containing counterion . [ 17 ] The metal-complexed aryne intermediate can be trapped by the counterion to produce aryl rings with fluoro, trifluoromethyl , or trifluoromethylthiol substituents. Unstable counterions, such as CF 3 − , can be produced in-situ.
Properly designed polyyne substrates have been shown to undergo efficient cascades of net [4+2] cycloadditions merely upon being heated. [ 21 ] This domino hexadehydro Diels–Alder reaction is initiated by a rate-limiting benzyne formation. Proceeding through naphthyne, anthracyne, and/or tetracyne intermediates, it enables rapid bottom-up synthesis of highly fused, polycyclic aromatic compounds.
Nitriles can also participate in the HDDA reactions to generate pyridyne intermediates. [ 22 ] In situ capturing of pyridynes gives rise to highly substituted and functionalized pyridine derivatives, which is complementary to other classical approaches for construction of this important class of heterocycles.
Designer multi-ynes arrayed upon a common, central template undergo sequential, multiple cycloisomerization reactions to produce architecturally novel polycyclic compounds in a single operation. [ 23 ] Diverse product topologies are accessible, ranging from highly fused, polycyclic aromatic compounds (PACs) to architectures having structurally complex arms adorning central phenylene or expanded phenylene cores. | https://en.wikipedia.org/wiki/Hexadehydro_Diels–Alder_reaction |
Hexaferrum and epsilon iron (ε-Fe) are synonyms for the hexagonal close-packed (HCP) phase of iron that is stable only at extremely high pressure.
A 1964 study at the University of Rochester mixed 99.8% pure α-iron powder with sodium chloride , and pressed a 0.5-mm diameter pellet between the flat faces of two diamond anvils. The deformation of the NaCl lattice, as measured by x-ray diffraction (XRD), served as a pressure indicator. At a pressure of 13 GPa and room temperature, the body-centered cubic (BCC) ferrite powder transformed to the HCP phase in Figure 1. When the pressure was lowered, ε-Fe transformed back to ferrite (α-Fe) rapidly. A specific volume change of −0.20 cm 3 /mole ± 0.03 was measured. Hexaferrum, much like austenite , is more dense than ferrite at the phase boundary. A shock wave experiment confirmed the diamond anvil results. Epsilon was chosen for the new phase to correspond with the HCP form of cobalt . [ 1 ]
The triple point between the alpha, gamma and epsilon phases in the unary phase diagram of iron has been calculated as T = 770 K and P = 11 GPa, [ 2 ] although it was determined at a lower temperature of T = 750 K (477 °C) in Figure 1. The Pearson symbol for hexaferrum is hP2 and its space group is P6₃/mmc. [ 3 ] [ 4 ]
Another study concerning the ferrite–hexaferrum transformation determined metallographically that it is a martensitic rather than an equilibrium transformation. [ 5 ]
While hexaferrum is purely academic in metallurgical engineering , it may have significance in geology . The pressure and temperature of Earth's iron core are on the order of 150–350 GPa and 3000 ± 1000 °C. An extrapolation of the austenite–hexaferrum phase boundary in Figure 1 suggests hexaferrum could be stable or metastable in Earth's core. [ 1 ] For this reason, many experimental studies have investigated the properties of HCP iron under extreme pressures and temperatures. Figure 2 shows the compressional behaviour of ε-iron at room temperature up to pressures comparable to those found halfway through the Earth's outer core; there are no data points at pressures below approximately 6 GPa, because this allotrope is not thermodynamically stable at low pressures and slowly transforms back into α-iron. | https://en.wikipedia.org/wiki/Hexaferrum
Hexafluoroethane is an organofluorine compound with the chemical formula C₂F₆. It is a non-flammable, colorless, odorless gas that is negligibly soluble in water and slightly soluble in methanol . Its structure is F₃C−CF₃. It is an extremely potent and long-lived greenhouse gas . It is the perfluorocarbon counterpart to the hydrocarbon ethane .
Hexafluoroethane's solid phase has two polymorphs . Different phase transition temperatures have been reported in the scientific literature; the most recent work places the transition at 103 K (−170 °C). Below 103 K it has a slightly disordered structure, and above the transition point it has a body-centered cubic structure. [ 1 ] The critical point is at 19.89 °C (293.04 K) and 30.39 bar . [ 2 ]
Vapor density is 4.823 (air = 1), specific gravity at 21 °C is 4.773 (air = 1), and specific volume at 21 °C is 0.1748 m³/kg.
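As a consistency check (not taken from the source), the specific volume and relative vapor density quoted above follow closely from the ideal-gas law and the molar masses of C₂F₆ and air. The short Python sketch below is a minimal illustration under that ideal-gas assumption.

```python
# Ideal-gas estimate of hexafluoroethane vapor properties at 21 degrees C and 1 atm.
R = 8.314        # J/(mol*K), gas constant
T = 294.15       # K (21 degrees C)
P = 101325.0     # Pa (1 atm)

M_C2F6 = 2 * 12.011 + 6 * 18.998   # g/mol, about 138.0
M_air = 28.97                      # g/mol, mean molar mass of dry air

molar_volume = R * T / P                              # m^3/mol, about 0.0241
specific_volume = molar_volume / (M_C2F6 / 1000.0)    # m^3/kg
vapor_density = M_C2F6 / M_air                        # relative to air = 1

print(f"specific volume ~ {specific_volume:.4f} m^3/kg  (quoted: 0.1748)")
print(f"vapor density   ~ {vapor_density:.2f} (air = 1)  (quoted: 4.823)")
# The small gap in vapor density reflects non-ideal-gas behaviour and the
# reference conditions used for the quoted figure.
```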
Hexafluoroethane is used as a versatile etchant in semiconductor manufacturing. It can be used for selective etching of metal silicides and oxides versus their metal substrates and also for etching of silicon dioxide over silicon . The primary aluminium industry, which uses the Hall–Héroult process , and the semiconductor manufacturing industry are the major emitters of hexafluoroethane.
Together with trifluoromethane , it is used in the refrigerant blends R508A (61%) and R508B (54%).
It is used as a tamponade to assist in retinal reattachment following vitreoretinal surgery . [ 3 ]
Due to the high energy of C−F bonds, hexafluoroethane is nearly inert and thus acts as an extremely stable greenhouse gas , with an atmospheric lifetime of 10,000 years (other sources: 500 years). [ 4 ] It has a global warming potential (GWP) of 9200 and an ozone depletion potential (ODP) of 0. Hexafluoroethane is included in the IPCC list of greenhouse gases .
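To give a sense of scale (an illustration, not a claim from the source), a simple single-exponential removal model with the 10,000-year lifetime quoted above implies that essentially none of an emitted pulse is removed on the century timescales over which warming potentials are usually compared.

```python
# Minimal sketch: fraction of an emitted C2F6 pulse remaining after t years,
# assuming simple single-exponential removal with the quoted 10,000-year lifetime.
import math

lifetime_years = 10_000.0
for t in (100, 500, 1000):
    remaining = math.exp(-t / lifetime_years)
    print(f"after {t:>4} years: ~{remaining:.1%} of the pulse remains")
# after 100 years: ~99.0%; after 500 years: ~95.1%; after 1000 years: ~90.5%
```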
Hexafluoroethane did not exist in significant amounts in the environment prior to industrial-scale manufacturing. The atmospheric concentration of hexafluoroethane reached 3 pptv at the start of the 21st century. [ 5 ] Its absorption bands in the infrared part of the spectrum cause a radiative forcing of about 0.001 W/m².
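The quoted forcing of about 0.001 W/m² is consistent with multiplying this abundance by a per-concentration radiative efficiency. The sketch below assumes a radiative efficiency of roughly 0.25 W m⁻² ppb⁻¹, a value commonly cited for C₂F₆ but not stated in this text.

```python
# Illustrative forcing estimate: abundance (ppb) x radiative efficiency (W/m^2 per ppb).
radiative_efficiency = 0.25        # W/m^2 per ppb, ASSUMED literature value for C2F6
concentration_ppb = 3.0 / 1000.0   # 3 pptv, as quoted above, converted to ppb

forcing = radiative_efficiency * concentration_ppb
print(f"estimated radiative forcing ~ {forcing:.4f} W/m^2")   # about 0.0008 W/m^2
```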
Due to its high relative density , it gathers in low-lying areas, and at high concentrations it can cause asphyxiation . | https://en.wikipedia.org/wiki/Hexafluoroethane |
This page provides supplementary chemical data on Hexafluoroethane .
Handling this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source such as SIRI , and follow its directions.
| https://en.wikipedia.org/wiki/Hexafluoroethane_(data_page)
Hexafluorophosphazene is an inorganic compound with the formula (NPF₂)₃. It takes the form of a white powder or lumps. It is sensitive to moisture and heat. [ 1 ]
The molecule has a cyclic , unsaturated P₃N₃ backbone consisting of alternating phosphorus and nitrogen atoms, and can be viewed as a trimer of the hypothetical compound N≡PF₂ (phosphazyl difluoride). Its classification as a phosphazene highlights its relationship to benzene . Hexafluorophosphazene has a hexagonal P₃N₃ ring with six equivalent P–N bonds. Each phosphorus atom is additionally bonded to two fluorine atoms. [ 2 ]
The molecule possesses D3h symmetry , and each phosphorus center is tetrahedral .
The P₃N₃ ring in hexachlorophosphazene deviates from planarity and is slightly ruffled (see chair conformation ). By contrast, the P₃N₃ ring in hexafluorophosphazene is completely planar . [ 3 ]
| https://en.wikipedia.org/wiki/Hexafluorophosphazene
Hexagon Tower is a specialist science and technology facility located in Blackley , Manchester , England.
The site is a former Imperial Chemical Industries (ICI) research, development and production centre. Facilities at Blackley have played host to enterprises since 1785, before becoming an integral part of the British Dyestuffs Corporation after 1919 and then ICI in 1926.
Following a purchase in 2008, Hexagon Tower is now part of the Business Environments for Science and Technology (BEST) Network of UK science parks, managed by LaSalle Investment Management . [ 1 ]
The site in which Hexagon Tower is located has a long industrial heritage. The first enterprise to locate in the area was the Borelle Dyeworks, established by French émigré Louis Borelle in 1785 to produce the Turkey red dye. [ 2 ]
Following a period of decline, the site was taken over by another French expatriate, Angel Raphael Louis Delaunay, who arrived in the area at the turn of the 19th century to establish his own dyeing business in Blackley. [ 2 ]
Following the death of Delaunay's son and heir Louis, German chemical entrepreneur Ivan Levinstein bought the dyeworks in 1865, leading to a period of commercial success for the site. [ 2 ] Besides his dyeing business, Levinstein was also famed for opening the Sackville Street Building [ 3 ] and founding Wrexham Lager . [ 4 ]
In 1919, Levinstein's operation at Blackley merged with other chemical dyers to form the British Dyestuffs Corporation Limited, [ 2 ] before becoming part of ICI in 1926. [ 2 ]
ICI was founded in December 1926 from the merger of four companies: Brunner Mond , Nobel Explosives , the United Alkali Company , and British Dyestuffs Corporation . [ 5 ] The Blackley Dyeworks site was subsequently integrated into the chemical giant's Specialty Chemicals division. [ 2 ]
Under ICI stewardship of the Blackley Dyeworks, architect Richard Seifert was commissioned to build Hexagon Tower, with construction completed in 1973. [ 6 ] The 14-storey tower was named after its hexagon-shaped windows, which echo the ring structure of the chemical compound benzene , widely used in the creation of synthetic dyes. [ 7 ]
At its height in the 1960s, the site employed more than 14,000 people. [ 6 ] However, amid growing competition from East Asian dye producers, ownership of ICI's Specialty Chemicals operations at Hexagon Tower passed to Zeneca in the mid-1990s.
Following Zeneca's move into pharmaceuticals, Hexagon Tower became the international headquarters of Avecia, which bought the site in 1999.
In 2008, LaSalle Investment Management – an independent subsidiary of Jones Lang LaSalle – purchased Hexagon Tower on behalf of a pension fund client.
LaSalle has turned the facility into a multi-let science park, accommodating a range of tenants from SMEs such as Colour Synthesis Solutions to multi-national firms, including Intertek . [ 8 ] [ 9 ]
The site now consists of 157,283 sq ft (14,612.1 m²) of machine halls, laboratories and office space over 13 floors and a lower ground level with multi-purpose laboratory space. [ 10 ]
Hexagon Tower received an International Standards Organisation certification for its environmental responsibility in January 2014.
The site researched Triarylmethane dyes in the 1920s.
It worked with the University of Bradford , which opened its Chemistry and Chemical Technology Building in October 1967. [ 11 ]
There was extensive building in the late 1960s, with a £3 million new technical services centre for around 700 scientists. The site claimed to have the largest concentration of organic chemists in the Commonwealth. [ 12 ]
Sir James Baddiley FRS, the son of one of the site's Directors of Research, was the first to synthesise ATP .
The site won the Queen's Award for Export Achievement, and also for Technological Innovation in 1966. [ 13 ] It won the award again in 1968 and 1969.
The site won the Queen's Award for Technological Achievement in 1990, [ 14 ] under ICI Colours and Fine Chemicals, for work on benzodifuranone-based dyes.
Zeneca LifeScience Molecules won the Queen's Award for Technology in 1997 [ 15 ] and the award for Export in 1998. [ 16 ]
In 1999 Zeneca Metal Extraction won the Queen's Award for Environmental Achievement. [ 17 ] | https://en.wikipedia.org/wiki/Hexagon_Tower |