id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
5,251,514 | https://en.wikipedia.org/wiki/Heat%20of%20formation%20group%20additivity | Heat of formation group additivity methods in thermochemistry enable the calculation and prediction of heat of formation of organic compounds based on additivity. This method was pioneered by S. W. Benson.
Benson model
Starting with simple linear and branched alkanes and alkenes, the method works by collecting a large number of experimental heat of formation data (see: Heat of Formation table) and then dividing each molecule into distinct groups, each consisting of a central atom with multiple ligands:
X-(A)i(B)j(C)k(D)l
To each group is then assigned an empirical incremental value which is independent of its position inside the molecule and independent of the nature of its neighbors:
P primary C-(C)(H)3 -10.00
S secondary C-(C)2(H)2 -5.00
T tertiary C-(C)3(H) -2.40
Q quaternary C-(C)4 -0.10
gauche correction +0.80
1,5 pentane interference correction +1.60
Values are in kcal/mol at 298 K.
The following example illustrates how these values can be derived.
The experimental heat of formation of ethane is -20.03 kcal/mol and ethane consists of 2 P groups. Likewise propane (-25.02 kcal/mol) can be written as 2P + S, isobutane (-32.07 kcal/mol) as 3P + T and neopentane (-40.18 kcal/mol) as 4P + Q. These four equations in four unknowns yield estimates for P (-10.01 kcal/mol), S (-4.99 kcal/mol), T (-2.03 kcal/mol) and Q (-0.12 kcal/mol). The accuracy increases as the dataset of experimental values grows.
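These four equations can be solved as a small linear system. A minimal numerical sketch (variable names are illustrative and not part of the original method description):

```python
import numpy as np

# Rows: ethane, propane, isobutane, neopentane; columns: counts of P, S, T, Q groups.
counts = np.array([
    [2, 0, 0, 0],   # ethane     = 2P
    [2, 1, 0, 0],   # propane    = 2P + S
    [3, 0, 1, 0],   # isobutane  = 3P + T
    [4, 0, 0, 1],   # neopentane = 4P + Q
], dtype=float)
heats = np.array([-20.03, -25.02, -32.07, -40.18])   # experimental values, kcal/mol

P, S, T, Q = np.linalg.solve(counts, heats)
print(P, S, T, Q)   # roughly -10.0, -5.0, -2.0, -0.1 kcal/mol, as quoted above
```

With a larger dataset the same idea becomes an over-determined least-squares fit rather than an exact solve.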
The data allow the calculation of the heat of formation of isomers. For example, the pentanes:
n-pentane = 2P + 3S = -35 (exp. -35 kcal/mol)
isopentane = 3P + S + T + 1 gauche correction = -36.6 (exp. -36.7 kcal/mol)
neopentane = 4P + Q = -40.1 (exp. -40.1 kcal/mol)
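The same estimates can be reproduced from the rounded group values and the gauche correction tabulated above; a minimal sketch (values taken from the table, variable names illustrative):

```python
# Rounded Benson group values in kcal/mol (from the table above).
P, S, T, Q, GAUCHE = -10.00, -5.00, -2.40, -0.10, 0.80

n_pentane  = 2 * P + 3 * S               # -35.0  (exp. -35)
isopentane = 3 * P + S + T + GAUCHE      # -36.6  (exp. -36.7)
neopentane = 4 * P + Q                   # -40.1  (exp. -40.1)
print(n_pentane, isopentane, neopentane)
```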
The group additivities for alkenes are:
Cd-(H)2 +6.27
Cd-(C)(H) +8.55
Cd-(C)2 +10.19
Cd-(Cd)(H) +6.78
Cd-(Cd)(C) +8.76
C-(Cd)(H)3 -10.00
C-(Cd)(C)(H)2 -4.80
C-(Cd)(C)2(H) -1.67
C-(Cd)(C)3 +1.77
C-(Cd)2(H)2 -4.30
cis correction +1.10
alkene gauche correction +0.80
In alkenes the cis isomer is always less stable than the trans isomer by 1.10 kcal/mol.
More group additivity tables exist for a wide range of functional groups.
Gronert model
An alternative model has been developed by S. Gronert, based not on breaking molecules into fragments but on 1,2 and 1,3 interactions.
The Gronert equation expresses the heat of formation as a sum of contributions from C-C and C-H bonds, from geminal 1,3 interactions (H-C-H, H-C-C and C-C-C) and from atom terms.
The pentanes are now calculated as:
n-pentane = 4CC + 12CH + 9HCH + 18HCC + 3CCC + (5C + 12H) = - 35.1 kcal/mol
isopentane = 4CC + 12CH + 10HCH + 16HCC + 4CCC + (5C + 12H) = - 36.7 kcal/mol
neopentane = 4CC + 12CH + 12HCH + 12HCC + 6CCC + (5C + 12H) = -40.1 kcal/mol
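The interaction counts used in these sums can be generated mechanically from the carbon skeleton. The sketch below is illustrative only (the function name and input format are assumptions, and the fitted Gronert parameter values are not included); it reproduces the counts quoted above for the three pentanes:

```python
def gronert_counts(carbon_bonds, n_carbons):
    """Count the 1,2 (bond) and 1,3 (geminal) interaction terms of the Gronert
    model for a saturated acyclic hydrocarbon. The carbon skeleton is given as
    a list of C-C bonds (pairs of 0-indexed carbons); hydrogens are implicit,
    each carbon carrying 4 minus its number of carbon neighbours."""
    neighbours = {i: set() for i in range(n_carbons)}
    for a, b in carbon_bonds:
        neighbours[a].add(b)
        neighbours[b].add(a)
    counts = {"CC": len(carbon_bonds), "CH": 0, "HCH": 0, "HCC": 0, "CCC": 0}
    for i in range(n_carbons):
        c = len(neighbours[i])               # carbon substituents on this centre
        h = 4 - c                            # implicit hydrogens
        counts["CH"]  += h
        counts["HCH"] += h * (h - 1) // 2    # geminal H...H pairs
        counts["HCC"] += h * c               # geminal H...C pairs
        counts["CCC"] += c * (c - 1) // 2    # geminal C...C pairs
    return counts

print(gronert_counts([(0, 1), (1, 2), (2, 3), (3, 4)], 5))  # n-pentane:  4 CC, 12 CH, 9 HCH, 18 HCC, 3 CCC
print(gronert_counts([(0, 1), (1, 2), (2, 3), (1, 4)], 5))  # isopentane: 4 CC, 12 CH, 10 HCH, 16 HCC, 4 CCC
print(gronert_counts([(0, 1), (0, 2), (0, 3), (0, 4)], 5))  # neopentane: 4 CC, 12 CH, 12 HCH, 12 HCC, 6 CCC
```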
Key to this treatment is the introduction of repulsive, destabilizing 1,3 interactions; this type of steric hindrance should exist given the molecular geometry of simple alkanes. In methane the distance between the hydrogen atoms is 1.8 angstrom, but the combined van der Waals radii of hydrogen are 2.4 angstrom, implying steric hindrance. Likewise, in propane the methyl-to-methyl distance is 2.5 angstrom, whereas the combined van der Waals radii are much larger (4 angstrom).
In the Gronert model these repulsive 1,3 interactions account for trends in bond dissociation energies, which for example decrease going from methane to ethane to propane to isobutane. In this model the homolysis of a C-H bond releases strain energy in the alkane. In traditional bonding models the driving force is instead the ability of alkyl groups to donate electrons to the newly formed free-radical carbon.
See also
Joback method
References
Thermochemistry
Thermodynamic models | Heat of formation group additivity | [
"Physics",
"Chemistry"
] | 1,109 | [
"Thermodynamic models",
"Thermochemistry",
"Thermodynamics"
] |
38,635,109 | https://en.wikipedia.org/wiki/Complexification%20%28Lie%20group%29 | In mathematics, the complexification or universal complexification of a real Lie group is given by a continuous homomorphism of the group into a complex Lie group with the universal property that every continuous homomorphism of the original group into another complex Lie group extends compatibly to a complex analytic homomorphism between the complex Lie groups. The complexification, which always exists, is unique up to unique isomorphism. Its Lie algebra is a quotient of the complexification of the Lie algebra of the original group. They are isomorphic if the original group has a quotient by a discrete normal subgroup which is linear.
For compact Lie groups, the complexification, sometimes called the Chevalley complexification after Claude Chevalley, can be defined as the group of complex characters of the Hopf algebra of representative functions, i.e. the matrix coefficients of finite-dimensional representations of the group. In any finite-dimensional faithful unitary representation of the compact group it can be realized concretely as a closed subgroup of the complex general linear group. It consists of operators with polar decomposition g = u ⋅ exp(iX), where u is a unitary operator in the compact group and X is a skew-adjoint operator in its Lie algebra. In this case the complexification is a complex algebraic group and its Lie algebra is the complexification of the Lie algebra of the compact Lie group.
Universal complexification
Definition
If G is a Lie group, a universal complexification is given by a complex Lie group GC and a continuous homomorphism φ: G → GC with the universal property that, if f: G → H is an arbitrary continuous homomorphism into a complex Lie group H, then there is a unique complex analytic homomorphism F: GC → H such that f = F ∘ φ.
Universal complexifications always exist and are unique up to a unique complex analytic isomorphism (preserving inclusion of the original group).
Existence
If is connected with Lie algebra , then its universal covering group is simply connected. Let be the simply connected complex Lie group with Lie algebra , let be the natural homomorphism (the unique morphism such that is the canonical inclusion) and suppose is the universal covering map, so that is the fundamental group of . We have the inclusion , which follows from the fact that the kernel of the adjoint representation of equals its centre, combined with the equality
which holds for any . Denoting by the smallest closed normal Lie subgroup of that contains , we must now also have the inclusion . We define the universal complexification of as
In particular, if is simply connected, its universal complexification is just .
The map is obtained by passing to the quotient. Since is a surjective submersion, smoothness of the map implies smoothness of .
For non-connected Lie groups with identity component and component group , the extension
induces an extension
and the complex Lie group is a complexification of .
Proof of the universal property
The map indeed possesses the universal property which appears in the above definition of complexification. The proof of this statement naturally follows from considering the following instructive diagram.
Here, is an arbitrary smooth homomorphism of Lie groups with a complex Lie group as the codomain.
For simplicity, we assume is connected. To establish the existence of , we first naturally extend the morphism of Lie algebras to the unique morphism of complex Lie algebras. Since is simply connected, Lie's second fundamental theorem now provides us with a unique complex analytic morphism between complex Lie groups, such that . We define as the map induced by , that is: for any . To show well-definedness of this map (i.e. ), consider the derivative of the map . For any , we have
,
which (by simple connectedness of ) implies . This equality finally implies , and since is a closed normal Lie subgroup of , we also have . Since is a complex analytic surjective submersion, the map is complex analytic since is. The desired equality is immediate.
To show uniqueness of , suppose that are two maps with . Composing with from the right and differentiating, we get , and since is the inclusion , we get . But is a submersion, so , thus connectedness of implies .
Uniqueness
The universal property implies that the universal complexification is unique up to complex analytic isomorphism.
Injectivity
If the original group is linear, so too is the universal complexification and the homomorphism between the two is an inclusion. give an example of a connected real Lie group for which the homomorphism is not injective even at the Lie algebra level: they take the product of by the universal covering group of and quotient out by the discrete cyclic subgroup generated by an irrational rotation in the first factor and a generator of the center in the second.
Basic examples
The following isomorphisms of complexifications of Lie groups with known Lie groups can be constructed directly from the general construction of the complexification.
The complexification of the special unitary group SU(2) of 2x2 matrices is
SL(2,C).
This follows from the isomorphism of Lie algebras
su(2) ⊗ C ≅ sl(2,C),
together with the fact that SU(2) is simply connected.
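As a quick check of this isomorphism (a standard computation, not taken from the source text), every traceless complex 2x2 matrix X decomposes over the reals into a traceless skew-Hermitian part plus i times another traceless skew-Hermitian part:

```latex
\mathfrak{sl}(2,\mathbb{C}) \;=\; \mathfrak{su}(2) \,\oplus\, i\,\mathfrak{su}(2),
\qquad
X \;=\; \tfrac{1}{2}\bigl(X - X^{*}\bigr) \;+\; i \cdot \tfrac{1}{2i}\bigl(X + X^{*}\bigr),
```

which exhibits sl(2,C) as the complexification of su(2) as a real Lie algebra.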
The complexification of the special linear group of 2x2 matrices is
.
This follows from the isomorphism of Lie algebras
,
together with the fact that is simply connected.
The complexification of the special orthogonal group of 3x3 matrices is
,
where denotes the proper orthochronous Lorentz group. This follows from the fact that is the universal (double) cover of , hence:
.
We also use the fact that is the universal (double) cover of .
The complexification of the proper orthochronous Lorentz group is
.
This follows from the same isomorphism of Lie algebras as in the second example, again using the universal (double) cover of the proper orthochronous Lorentz group.
The complexification of the special orthogonal group of 4x4 matrices is
.
This follows from the fact that is the universal (double) cover of , hence and so .
The last two examples show that Lie groups with isomorphic complexifications may not be isomorphic. Furthermore, the complexifications of Lie groups and show that complexification is not an idempotent operation, i.e. (this is also shown by complexifications of and ).
Chevalley complexification
Hopf algebra of matrix coefficients
If is a compact Lie group, the *-algebra of matrix coefficients of finite-dimensional unitary representations is a uniformly dense *-subalgebra of , the *-algebra of complex-valued continuous functions on . It is naturally a Hopf algebra with comultiplication given by
The characters of are the *-homomorphisms of into . They can be identified with the point evaluations for in and the comultiplication allows the group structure on to be recovered. The homomorphisms of into also form a group. It is a complex Lie group and can be identified with the complexification of . The *-algebra is generated by the matrix coefficients of any faithful representation of . It follows that defines a faithful complex analytic representation of .
Invariant theory
The original approach of to the complexification of a compact Lie group can be concisely stated within the language of classical invariant theory, described in . Let be a closed subgroup of the unitary group where is a finite-dimensional complex inner product space. Its Lie algebra consists of all skew-adjoint operators such that lies in for all real . Set with the trivial action of on the second summand. The group acts on , with an element acting as . The commutant (or centralizer algebra) is denoted by . It is generated as a *-algebra by its unitary operators and its commutant is the *-algebra spanned by the operators . The complexification of consists of all operators in such that commutes with and acts trivially on the second summand in . By definition it is a closed subgroup of . The defining relations (as a commutant) show that is an algebraic subgroup. Its intersection with coincides with , since it is a priori a larger compact group for which the irreducible representations stay irreducible and inequivalent when restricted to . Since is generated by unitaries, an invertible operator lies in if the unitary operator and positive operator in its polar decomposition both lie in . Thus lies in and the operator can be written uniquely as with a self-adjoint operator. By the functional calculus for polynomial functions it follows that lies in the commutant of if with in . In particular taking purely imaginary, must have the form with in the Lie algebra of . Since every finite-dimensional representation of occurs as a direct summand of , it is left invariant by and thus every finite-dimensional representation of extends uniquely to . The extension is compatible with the polar decomposition. Finally the polar decomposition implies that is a maximal compact subgroup of , since a strictly larger compact subgroup would contain all integer powers of a positive operator , a closed infinite discrete subgroup.
Decompositions in the Chevalley complexification
Cartan decomposition
The decomposition derived from the polar decomposition
where is the Lie algebra of , is called the Cartan decomposition of . The exponential factor is invariant under conjugation by but is not a subgroup. The complexification is invariant under taking adjoints, since consists of unitary operators and of positive operators.
Gauss decomposition
The Gauss decomposition is a generalization of the LU decomposition for the general linear group and a specialization of the Bruhat decomposition. For it states that with respect to a given orthonormal basis an element of can be factorized in the form
with lower unitriangular, upper unitriangular and diagonal if and only if all the principal minors of are non-vanishing. In this case and are uniquely determined.
In fact Gaussian elimination shows there is a unique such that is upper triangular.
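For concreteness, the 2x2 case of this factorization (a standard illustration, not taken from the source text) reads

```latex
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
=
\begin{pmatrix} 1 & 0 \\ c/a & 1 \end{pmatrix}
\begin{pmatrix} a & 0 \\ 0 & d - \tfrac{bc}{a} \end{pmatrix}
\begin{pmatrix} 1 & b/a \\ 0 & 1 \end{pmatrix},
\qquad a \neq 0,
```

so a matrix lies in the open dense Gauss cell exactly when its principal minors a and ad - bc are non-vanishing.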
The upper and lower unitriangular matrices, and , are closed unipotent subgroups of GL(V). Their Lie algebras consist of upper and lower strictly triangular matrices. The exponential mapping is a polynomial mapping from the Lie algebra to the corresponding subgroup by nilpotence. The inverse is given by the logarithm mapping which by unipotence is also a polynomial mapping. In particular there is a correspondence between closed connected subgroups of and subalgebras of their Lie algebras. The exponential map is onto in each case, since the polynomial function lies in a given Lie subalgebra if and do and are sufficiently small.
The Gauss decomposition can be extended to complexifications of other closed connected subgroups of by using the root decomposition to write the complexified Lie algebra as
where is the Lie algebra of a maximal torus of and are the direct sum of the corresponding positive and negative root spaces. In the weight space decomposition of as eigenspaces of acts as diagonally, acts as lowering operators and as raising operators. are nilpotent Lie algebras acting as nilpotent operators; they are each other's adjoints on . In particular acts by conjugation of , so that is a semidirect product of a nilpotent Lie algebra by an abelian Lie algebra.
By Engel's theorem, if is a semidirect product, with abelian and nilpotent, acting on a finite-dimensional vector space with operators in diagonalizable and operators in nilpotent, there is a vector that is an eigenvector for and is annihilated by . In fact it is enough to show there is a vector annihilated by , which follows by induction on , since the derived algebra annihilates a non-zero subspace of vectors on which and act with the same hypotheses.
Applying this argument repeatedly to shows that there is an orthonormal basis of consisting of eigenvectors of with acting as upper triangular matrices with zeros on the diagonal.
If and are the complex Lie groups corresponding to and , then the Gauss decomposition states that the subset
is a direct product and consists of the elements in for which the principal minors are non-vanishing. It is open and dense. Moreover, if denotes the maximal torus in ,
These results are an immediate consequence of the corresponding results for .
Bruhat decomposition
If denotes the Weyl group of and denotes the Borel subgroup , the Gauss decomposition is also a consequence of the more precise Bruhat decomposition
decomposing into a disjoint union of double cosets of . The complex dimension of a double coset is determined by the length of as an element of . The dimension is maximized at the Coxeter element and gives the unique open dense double coset. Its inverse conjugates into the Borel subgroup of lower triangular matrices in .
The Bruhat decomposition is easy to prove for . Let be the Borel subgroup of upper triangular matrices and the subgroup of diagonal matrices. So . For in , take in so that maximizes the number of zeros appearing at the beginning of its rows. Because a multiple of one row can be added to another, each row has a different number of zeros in it. Multiplying by a matrix in , it follows that lies in . For uniqueness, if , then the entries of vanish below the diagonal. So the product lies in , proving uniqueness.
showed that the expression of an element as becomes unique if is restricted to lie in the upper unitriangular subgroup . In fact, if , this follows from the identity
The group has a natural filtration by normal subgroups with zeros in the first superdiagonals and the successive quotients are Abelian. Defining and to be the intersections with , it follows by decreasing induction on that . Indeed, and are specified in by the vanishing of complementary entries on the th superdiagonal according to whether preserves the order or not.
The Bruhat decomposition for the other classical simple groups can be deduced from the above decomposition using the fact that they are fixed point subgroups of folding automorphisms of . For , let be the matrix with 's on the antidiagonal and 's elsewhere and set
Then is the fixed point subgroup of the involution . It leaves the subgroups and invariant. If the basis elements are indexed by , then the Weyl group of consists of satisfying
, i.e. commuting with . Analogues of and are defined by intersection with , i.e. as fixed points of . The uniqueness of the decomposition implies the Bruhat decomposition for .
The same argument works for . It can be realised as the fixed points of in where .
Iwasawa decomposition
The Iwasawa decomposition
gives a decomposition for for which, unlike the Cartan decomposition, the direct factor is a closed subgroup, but it is no longer invariant under conjugation by . It is the semidirect product of the nilpotent subgroup by the Abelian subgroup .
For and its complexification , this decomposition can be derived as a restatement of the Gram–Schmidt orthonormalization process.
In fact let be an orthonormal basis of and let be an element in . Applying the Gram–Schmidt process to , there is a unique orthonormal basis and positive constants such that
If is the unitary taking to , it follows that lies in the subgroup , where is the subgroup of positive diagonal matrices with respect to and is the subgroup of upper unitriangular matrices.
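A minimal numerical sketch of this Gram–Schmidt picture for GL(n,C), using a QR factorization and then normalizing phases and the diagonal (the function name and test values are illustrative assumptions):

```python
import numpy as np

def iwasawa_gl(g):
    """Factor an invertible complex matrix as g = k a n, with k unitary,
    a positive diagonal and n unit upper triangular (QR plus normalization)."""
    q, r = np.linalg.qr(g)
    d = np.diagonal(r)
    phases = d / np.abs(d)                 # unit-modulus phases of diag(r)
    k = q * phases                         # absorb the phases into the unitary factor
    r = r / phases[:, None]                # now diag(r) is real and positive
    a = np.diag(np.abs(np.diagonal(r)))
    n = np.diag(1.0 / np.diagonal(r)) @ r  # unit upper triangular factor
    return k, a, n

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
k, a, n = iwasawa_gl(g)
assert np.allclose(k @ a @ n, g)
assert np.allclose(k.conj().T @ k, np.eye(4))
```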
Using the notation for the Gauss decomposition, the subgroups in the Iwasawa decomposition for are defined by
Since the decomposition is direct for , it is enough to check that . From the properties of the Iwasawa decomposition for , the map is a diffeomorphism onto its image in , which is closed. On the other hand, the dimension of the image is the same as the dimension of , so it is also open. So because is connected.
gives a method for explicitly computing the elements in the decomposition. For in set . This is a positive self-adjoint operator so its principal minors do not vanish. By the Gauss decomposition, it can therefore be written uniquely in the form
with in , in and in . Since is self-adjoint, uniqueness forces . Since it is also positive must lie in and have the form for some unique in . Let be its unique square root in . Set and . Then is unitary, so is in , and .
Complex structures on homogeneous spaces
The Iwasawa decomposition can be used to describe complex structures on the s in complex projective space of highest weight vectors of finite-dimensional irreducible representations of . In particular the identification between and can be used to formulate the Borel–Weil theorem. It states that each irreducible representation
of can be obtained by holomorphic induction from a character of , or equivalently that it is realized in the space of sections of a holomorphic line bundle on .
The closed connected subgroups of containing are described by Borel–de Siebenthal theory. They are exactly the centralizers of tori . Since every torus is generated topologically by a single element , these are the same as centralizers of elements in . By a result of Hopf is always connected: indeed any element is along with contained in some maximal torus, necessarily contained in .
Given an irreducible finite-dimensional representation with highest weight vector of weight , the stabilizer of in is a closed subgroup . Since is an eigenvector of , contains . The complexification also acts on and the stabilizer is a closed complex subgroup containing . Since is annihilated by every raising operator corresponding to a positive root , contains the Borel subgroup . The vector is also a highest weight vector for the copy of corresponding to , so it is annihilated by the lowering operator generating if . The Lie algebra of is the direct sum of and root space vectors annihilating , so that
The Lie algebra of is given by . By the Iwasawa decomposition . Since fixes , the -orbit of in the complex projective space of coincides with the orbit and
In particular
Using the identification of the Lie algebra of with its dual, equals the centralizer of in , and hence is connected. The group is also connected. In fact the space is simply connected,
since it can be written as the quotient of the (compact) universal covering group of the compact semisimple group by a connected subgroup, where is the center of . If is the identity component of , has as a covering space, so that . The homogeneous space has a complex structure, because is a complex subgroup. The orbit in complex projective space is closed in the Zariski topology by Chow's theorem, so is a smooth projective variety. The Borel–Weil theorem and its generalizations are discussed in this context in , , and .
The parabolic subgroup can also be written as a union of double cosets of
where is the stabilizer of in the Weyl group . It is generated by the reflections corresponding to the simple roots orthogonal to .
Noncompact real forms
There are other closed subgroups of the complexification of a compact connected Lie group G which have the same complexified Lie algebra. These are the other real forms of GC.
Involutions of simply connected compact Lie groups
If G is a simply connected compact Lie group and σ is an automorphism of order 2, then the fixed point subgroup K = Gσ is automatically connected. (In fact this is true for any automorphism of G, as shown for inner automorphisms by Steinberg and in general by Borel.)
This can be seen most directly when the involution σ corresponds to a Hermitian symmetric space. In that case σ is inner and implemented by an element in a one-parameter subgroup exp tT contained in the center of Gσ. The innerness of σ implies that K contains a maximal torus of G, so has maximal rank. On the other hand, the centralizer of the subgroup generated by the torus S of elements exp tT is connected, since if x is any element in K there is a maximal torus containing x and S, which lies in the centralizer. On the other hand, it contains K since S is central in K and is contained in K since z lies in S. So K is the centralizer of S and hence connected. In particular K contains the center of G.
For a general involution σ, the connectedness of Gσ can be seen as follows.
The starting point is the Abelian version of the result: if T is a maximal torus of a simply connected group G and σ is an involution leaving invariant T and a choice of positive roots (or equivalently a Weyl chamber), then the fixed point subgroup Tσ is connected. In fact the kernel of the exponential map from onto T is a lattice Λ with a Z-basis indexed by simple roots, which σ permutes. Splitting up according to orbits, T can be written as a product of terms T on which σ acts trivially or terms T2 where σ interchanges the factors. The fixed point subgroup just corresponds to taking the diagonal subgroups in the second case, so is connected.
Now let x be any element fixed by σ, let S be a maximal torus in CG(x)σ and let T be the identity component of CG(x, S). Then T is a maximal torus in G containing x and S. It is invariant under σ and the identity component of Tσ is S. In fact since x and S commute, they are contained in a maximal torus which, because it is connected, must lie in T. By construction T is invariant under σ. The identity component of Tσ contains S, lies in CG(x)σ and centralizes S, so it equals S. But S is central in T, so T must be Abelian and hence a maximal torus. For σ acts as multiplication by −1 on the Lie algebra , so it and therefore also are Abelian.
The proof is completed by showing that σ preserves a Weyl chamber associated with T. For then Tσ is connected so must equal S. Hence x lies in S. Since x was arbitrary, Gσ must therefore be connected.
To produce a Weyl chamber invariant under σ, note that there is no root space on which both x and S acted trivially, for this would contradict the fact that CG(x, S) has the same Lie algebra as T. Hence there must be an element s in S such that t = xs acts non-trivially on each root space. In this case t is a regular element of T—the identity component of its centralizer in G equals T. There is a unique Weyl alcove A in such that t lies in exp A and 0 lies in the closure of A. Since t is fixed by σ, the alcove is left invariant by σ and hence so also is the Weyl chamber C containing it.
Conjugations on the complexification
Let G be a simply connected compact Lie group with complexification GC. The map c(g) = (g*)−1 defines an automorphism of GC as a real Lie group with G as fixed point subgroup. It is conjugate-linear on and satisfies c2 = id. Such automorphisms of either GC or are called conjugations.
Since GC is also simply connected any conjugation c1 on corresponds to a unique automorphism c1 of GC.
The classification of conjugations c0 reduces to that of involutions σ of G because
given a c1 there is an automorphism φ of the complex group GC such that
commutes with c. The conjugation c0 then leaves G invariant and restricts to an involutive automorphism σ. By simple connectivity the same is true at the level of Lie algebras. At the Lie algebra level c0 can be recovered from σ by the formula
for X, Y in .
To prove the existence of φ let ψ = c1c, an automorphism of the complex group GC. On the Lie algebra level it defines a self-adjoint operator for the complex inner product
where B is the Killing form on . Thus ψ2 is a positive operator and an automorphism along with all its real powers. In particular take
It satisfies
Cartan decomposition in a real form
For the complexification GC, the Cartan decomposition is described above. Derived from the polar decomposition in the complex general linear group, it gives a diffeomorphism
On GC there is a conjugation operator c corresponding to G as well as an involution σ commuting with c. Let c0 = c σ and let G0 be the fixed point subgroup of c. It is closed in the matrix group GC and therefore a Lie group. The involution σ acts on both G and G0. For the Lie algebra of G there is a decomposition
into the +1 and −1 eigenspaces of σ. The fixed point subgroup K of σ in G is connected since G is simply connected. Its Lie algebra is the +1 eigenspace . The Lie algebra of G0 is given by
and the fixed point subgroup of σ is again K, so that G ∩ G0 = K. In G0, there is a Cartan decomposition
which is again a diffeomorphism onto the direct product and corresponds to the polar decomposition of matrices.
It is the restriction of the decomposition on GC. The product gives a diffeomorphism onto a closed subset of G0. To check that it is surjective, for g in G0 write g = u ⋅ p with u in G and p in P. Since c0 g = g, uniqueness implies that σu = u and σp = p−1. Hence u lies in K and p in P0.
The Cartan decomposition in G0 shows that G0 is connected, simply connected and noncompact, because of the direct factor P0. Thus G0 is a noncompact real semisimple Lie group.
Moreover, given a maximal Abelian subalgebra in , A = exp is a toral subgroup such that σ(a) = a−1 on A; and any two such 's are conjugate by an element of K.
The properties of A can be shown directly. A is closed because the closure of A is a toral subgroup satisfying σ(a) = a−1, so its Lie algebra lies in and hence equals by maximality. A can be generated topologically by a single element exp X, so is the centralizer of X in . In the K-orbit of any element of there is an element Y such that (X,Ad k Y) is minimized at k = 1. Setting k = exp tT with T in , it follows that (X,[T,Y]) = 0 and hence [X,Y] = 0, so that Y must lie in . Thus is the union of the conjugates of . In particular some conjugate of X lies in any other choice of , which centralizes that conjugate; so by maximality the only possibilities are conjugates of .
Similar statements hold for the action of K on in . Moreover, from the Cartan decomposition for G0, if A0 = exp , then
Iwasawa decomposition in a real form
See also
Real form (Lie theory)
Notes
References
Lie groups
Lie algebras
Algebraic groups
Representation theory | Complexification (Lie group) | [
"Mathematics"
] | 5,673 | [
"Lie groups",
"Mathematical structures",
"Fields of abstract algebra",
"Algebraic structures",
"Representation theory"
] |
38,635,240 | https://en.wikipedia.org/wiki/Virtual%20Extensible%20LAN | Virtual eXtensible LAN (VXLAN) is a network virtualization technology that uses a VLAN-like encapsulation technique to encapsulate OSI layer 2 Ethernet frames within layer 4 UDP datagrams, using 4789 as the default IANA-assigned destination UDP port number, although many implementations that predate the IANA assignment use port 8472. VXLAN attempts to address the scalability problems associated with large cloud computing deployments. VXLAN endpoints, which terminate VXLAN tunnels and may be either virtual or physical switch ports, are known as VXLAN tunnel endpoints (VTEPs).
History
VXLAN is an evolution of efforts to standardize on an overlay encapsulation protocol. Compared to single-tagged IEEE 802.1Q VLANs which provide a limited number of layer-2 VLANs (4094, using a 12-bit VLAN ID), VXLAN increases scalability up to about 16 million logical networks (using a 24-bit VNID) and allows for layer-2 adjacency across IP networks. Multicast or unicast with head-end replication (HER) is used to flood Broadcast, unknown-unicast and multicast traffic.
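The scalability figures follow from the field widths: a 12-bit VLAN ID allows 4094 usable VLANs, while the 24-bit VXLAN Network Identifier (VNI) allows about 16 million segments. A minimal sketch of packing the 8-byte VXLAN header defined in RFC 7348 (illustrative only, not a production encapsulator):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header of RFC 7348:
    | flags (8) | reserved (24) | VNI (24) | reserved (8) |"""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_word = 0x08 << 24      # I flag set; remaining bits are reserved (zero)
    vni_word = vni << 8          # VNI occupies the upper 24 bits of the second word
    return struct.pack("!II", flags_word, vni_word)

print(vxlan_header(5000).hex())  # '0800000000138800'
```

The full encapsulation then wraps this header, together with the original Ethernet frame, in a UDP datagram addressed to port 4789 (or 8472 in implementations predating the IANA assignment).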
The VXLAN specification was originally created by VMware, Arista Networks and Cisco.
Implementations
VxLAN is widely, but not universally, implemented in commercial networking equipment. Several open-source implementations of VxLAN also exist.
Commercial
Arista, Cisco, and VMware were the originators of VxLAN and support it in various products.
Other backers of the VXLAN technology include Huawei, Broadcom, Citrix, Pica8, Big Switch Networks, Arrcus, Cumulus Networks, Dell EMC, Ericsson, Mellanox, Red Hat, Joyent, and Juniper Networks.
Open source
FreeBSD
OpenBSD
Open vSwitch, an example of a software-based virtual network switch that supports VXLAN overlay networks.
Standards specifications
VXLAN is officially documented by the IETF in RFC 7348. VXLAN encapsulates a MAC frame in a UDP datagram for transport across an IP network, creating an overlay network or tunnel.
Alternative technologies
Alternative technologies addressing the same or similar operational concerns, include:
IEEE 802.1ad ("Q-in-Q"), which greatly increases the number of VLANs supported by standard IEEE 802 Ethernet beyond 4K.
IEEE 802.1ah ("MAC-in-MAC"), which supports tunneling Ethernet in a way which greatly increases the number of VLANs supported while avoiding a large increase in the size of the MAC Address table in a Carrier Ethernet deployment.
Network Virtualization using Generic Route Encapsulation (NVGRE), which uses different framing but has similar goals to VxLAN.
See also
Distributed Overlay Virtual Ethernet (DOVE)
Ethernet VPN (EVPN)
GENEVE, an industry effort to unify both VXLAN and NVGRE technologies
Generic routing encapsulation (GRE)
IEEE 802.1ad, an Ethernet networking standard, also known as provider bridging, Stacked VLANs, or simply Q-in-Q.
IEEE 802.1ah, an IEEE Ethernet networking standard, also known as Provider Backbone Bridging (PBB) or MAC-in-MAC.
NVGRE, Network Virtualization using GRE, which is a similar competing specification to VxLAN.
Overlay Transport Virtualization (OTV)
Virtual LAN (VLAN)
Layer 2 Tunneling Protocol (L2TP)
References
External links
VXLAN Deep Dive: Part 1 and Part 2, November 2012, by Joe Onisick
Tunneling protocols | Virtual Extensible LAN | [
"Engineering"
] | 788 | [
"Computer networks engineering",
"Tunneling protocols"
] |
38,635,315 | https://en.wikipedia.org/wiki/Network%20Virtualization%20using%20Generic%20Routing%20Encapsulation | Network Virtualization using Generic Routing Encapsulation (NVGRE) is a network virtualization technology that attempts to alleviate the scalability problems associated with large cloud computing deployments. It uses Generic Routing Encapsulation (GRE) to tunnel layer 2 packets over layer 3 networks.
Its principal backer is Microsoft.
See also
Virtual Extensible LAN (VXLAN), a similar competing specification
Generic Networking Virtualization Encapsulation (GENEVE), an industry effort to unify both VXLAN and NVGRE technologies
Generic Routing Encapsulation (GRE), for transporting layer 3 packets.
References
External links
NVGRE Overview, November 19, 2012, by Joe Onisick
Tunneling protocols | Network Virtualization using Generic Routing Encapsulation | [
"Technology",
"Engineering"
] | 149 | [
"Computing stubs",
"Computer networks engineering",
"Tunneling protocols",
"Computer network stubs"
] |
38,635,705 | https://en.wikipedia.org/wiki/Dijet%20event | In particle physics, a dijet event is a collision between subatomic particles that produces two particle jets.
Dijet events are measured at the LHC to constrain QCD models, in particular the parton evolution equations and parton distribution functions. This is accomplished by measuring the azimuthal correlations between the two jets.
References
Experimental particle physics | Dijet event | [
"Physics"
] | 75 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
38,636,326 | https://en.wikipedia.org/wiki/Coagulation%20testing | Blood clotting tests are the tests used for diagnostics of the hemostasis system.
A coagulometer is the medical laboratory analyzer used for testing the hemostasis system. Modern coagulometers implement different methods of activating and observing the development of blood clots in whole blood or in blood plasma.
Classification of blood clotting tests
Substantially all coagulometers used in laboratory diagnostics are based on methods of testing the hemostasis system created more than fifty years ago. The majority of these methods are good at detecting a defect in a single component of hemostasis without diagnosing other possible defects. Another problem in current diagnostics of the hemostasis system is thrombosis prediction, i.e. sensitivity to a patient's prethrombotic state. The full variety of clinical tests of the blood coagulation system can be divided into two groups: global (integral, general) tests and "local" (specific) tests.
Global tests
Global tests, also known as global coagulation assays (GCAs), characterize the performance of the whole clotting cascade. They are suited to diagnosing the general state of the blood coagulation system and the severity of pathologies, while simultaneously recording all attendant influences. Global methods play the key role at the first stage of diagnostics: they provide an integral picture of alterations within the coagulation system and allow prediction of a general tendency toward hyper- or hypocoagulation.
Local tests
Local tests characterize the performance of individual components of the blood coagulation cascade, as well as of individual coagulation factors. They are essential for localizing a pathology down to the level of a single coagulation factor.
A D-dimer test (D-dimers are products of thrombus degradation) can be considered separately. A rise in D-dimer concentration in the patient's blood indicates that thrombosis may already have occurred.
To obtain a complete picture of a patient's hemostasis, the clinician should be able to choose which test is necessary.
According to the type of sample investigated, the following complementary groups of methods can be distinguished:
Tests in platelet-poor or platelet-free plasma (convenient for transport; can be frozen; optical observation methods can be used; but the platelet component of hemostasis is not taken into account),
Tests in platelet-rich plasma (close to real conditions in the body, but with restrictions on working time),
Tests in whole blood (the closest to human physiology; the test can be started immediately; but the least convenient because of blood storage limits and difficulties in interpreting the results).
Specific global tests
Thromboelastography (TEG)
Investigation of the whole blood
No information about the thrombin formation kinetics, low separability of plasma and thrombocyte hemostasis contribution
Non-standardized
Low sensitivity
Thrombin generation assay (TGA) (thrombin potential, endogenous thrombin potential (ETP))
Possibility to use platelet poor plasma or platelet rich plasma
Information about the catalyst of the main reaction – transformation of fibrinogen into fibrin
Homogenous (activation in the whole sample volume)
ETP-based activated protein C resistance test (ETP-based APCR)
Thrombodynamics test
Non-homogenous: realization of the three-dimensional model of the clot growth
Use of platelet free plasma
Record of information about the clot formation as a diagram, giving the possibility to calculate the key parameters of the blood coagulation system
New test, not widely accepted
Overall hemostatic potential (OHP)
Specific local tests
Activated partial thromboplastin time (APTT or aPTT)
Characteristics of the velocity of passage of the intrinsic coagulation pathway
Poor plasma (the most convenient to work with, but no realization of the thrombocyte clotting mechanism)
Contact activation pathway
Prothrombin time test (or prothrombin test, INR, PT) – velocity of passage of the extrinsic blood coagulation pathway
Poor plasma
Not sensitive to deficiency of intrinsic coagulation pathway factors
Highly specialized methods to reveal the alteration in concentration of separate factors.
References
Blood tests | Coagulation testing | [
"Chemistry"
] | 903 | [
"Blood tests",
"Chemical pathology"
] |
38,636,545 | https://en.wikipedia.org/wiki/C13H26 | The molecular formula C13H26 (molar mass: 182.34 g/mol, exact mass: 182.2035 u) may refer to:
Cyclotridecane
Tridecene
Molecular formulas | C13H26 | [
"Physics",
"Chemistry"
] | 58 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
38,636,554 | https://en.wikipedia.org/wiki/C14H28 | The molecular formula C14H28 (molar mass: 196.37 g/mol, exact mass: 196.2191 u) may refer to:
Cyclotetradecane
Tetradecene
Molecular formulas | C14H28 | [
"Physics",
"Chemistry"
] | 60 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
6,890,306 | https://en.wikipedia.org/wiki/Master%20clock | A master clock is a precision clock that provides timing signals to synchronise slave clocks as part of a clock network. Networks of electric clocks connected by wires to a precision master pendulum clock began to be used in institutions like factories, offices, and schools around 1900. Modern radio clocks are synchronised by radio signals or Internet connections to a worldwide time system called Coordinated Universal Time (UTC), which is governed by primary reference atomic clocks in many countries.
A modern, atomic version of a master clock is the large clock ensemble found at the U.S. Naval Observatory.
History
Between the late 1800s and the availability of Internet time services, many large institutions that depended on accurate timekeeping such as schools, offices, railway networks, telephone exchanges, and factories used master/slave clock networks. These consisted of multiple slave clocks and other timing devices, connected through wires to a master clock which kept them synchronized by electrical signals. The master clock was usually a precision pendulum clock with a seconds pendulum and a robust mechanism. It generated periodic timing signals by electrical contacts attached to the mechanism, transmitted to the controlled equipment through pairs of wires. The controlled devices could be wall clocks, tower clocks, factory sirens, school bells, time card punches, and paper tape programmers that ran factory machines. Thousands of such systems were installed in industrial countries and enabled the precise scheduling that industrial economies depended on.
In early networks, the slave clocks had their own timekeeping mechanism and were just corrected by the signals from the master clock every hour, 6, 12, or 24 hours. In later networks, the slave clocks were simply counters that used a stepper motor to advance the hands with each pulse from the master clock, once per second or once per minute. Some types, such as the Synchronome, had optional extra mechanisms to compare the time of the clock with a national time service that distributed time signals from astronomical regulator clocks in a country's naval observatory by telegraph wire. An example is the GPO time service in Britain which distributed signals from the Greenwich Observatory.
The British Post Office (GPO) used such master clocks in their electromechanical telephone exchanges to generate the call timing pulses necessary to charge telephone subscribers for their calls, and to control sequences of events such as the forcible clearing of connections where the calling subscriber failed to hang up after the called subscriber had done so. The UK had four such manufacturers, all of whom made clocks to the same GPO specification and which used the Hipp Toggle impulse system; these were Gent and Co., of Leicester, Magneta Ltd of Leatherhead in Surrey, Synchronome Ltd of Alperton, north-west London, and Gillett and Johnson.
See also
Shortt–Synchronome clock
Pendulum clock
Escapement
References
External links
All about electric master and slave clocks
Examples of Master Clock Systems
GPO clock systems
Telecommunications equipment
Clocks | Master clock | [
"Physics",
"Technology",
"Engineering"
] | 590 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
6,892,615 | https://en.wikipedia.org/wiki/Acoustic%20tag | Acoustic tags are small sound-emitting devices that allow the detection and/or remote tracking of organisms in aquatic ecosystems. Acoustic tags are commonly used to monitor the behavior of fish. Studies can be conducted in lakes, rivers, tributaries, estuaries or at sea. Acoustic tag technology allows researchers to obtain locational data of tagged fish: depending on tag and receiver array configurations, researchers can receive simple presence/absence data, 2D positional data, or even 3D fish tracks in real-time with sub-meter resolutions.
Acoustic tags allow researchers to:
Conduct Survival Studies
Monitor Migration/Passage/Trajectory
Track Behavior in Two or Three Dimensions (2D or 3D)
Measure Bypass Effectiveness at Dams and other Passages
Observe Predator/Prey Dynamics
Surveil movement and activity
Sampling
Acoustic Tags transmit a signal made up of acoustic pulses or "pings" that send location information about the tagged organism to the hydrophone receiver. By matching the received acoustic signature to a known programmed signal code, the specific tagged individual is identified. The transmitted signal can propagate up to 1 km (in freshwater). Receivers can be actively held by a researcher ("Active Tracking") or affixed to specific locations ("Passive Tracking"). Arrays of receivers can allow the triangulation of tagged individuals over many kilometers. Acoustic tags can have very long battery life - some tags last up to four years.
Tags
Acoustic Tags are produced in many different shapes and sizes depending on the type of species being studied, or the type of environment in which the study is conducted. Sound parameters such as frequency and modulation method are chosen for optimal detectability and signal level. For oceanic environments, frequencies below 100 kHz are often used, while frequencies of several hundred kilohertz are more common for studies in rivers and lakes.
A typical Acoustic Tag consists of a piezoceramic transducer, drive/timing electronics, and a battery power source. Cylindrical or “tube” transducers are often used, which have metalization on the inner and outer walls of the structure. In normal operation, an alternating current (AC) electrical signal generated by the drive/timing electronics is impressed across the two metalization layers. This voltage creates stress in the material, which in turn causes the transducer to emit an acoustic signal or “ping”, which emanates outward from the surface of the tube. An acoustic “ping” can be detected by specialized receivers, and processed using advanced signal processing techniques to determine if a fish swimming into the reception area carries a specific acoustic tag.
Acoustic Tags are distinguished from other types of devices such as radio tags, or passive inductive transponder (PIT) tags, in that they can work in either salt or freshwater (RF and PIT tags perform poorly in saltwater) and do not depend on steering the fish in a particular path (PIT tags require the fish to be routed through a restricted sensing area).
Several different types of methods are used to attach the tag to an organism. In fish, tags are frequently embedded into the individual by cutting a small incision in the abdominal cavity of the fish (surgical implantation), or put down the gullet to embed the Acoustic Tag in the stomach (gastric implantation). External attachment using adhesive compounds is typically not used for fish as scale fluids do not allow for any successful attachment to scale tissue. In other organisms tags are attached with heavy duty glues.
Receivers
By determining the sound's time of arrival at each hydrophone, the 3D position of the fish can be calculated. The hydrophone receiver picks up the sound signal and converts it to data that researchers use to plot the resulting tag positions in three dimensions, in real-time. Post-processing software, such as MarkTags, takes that data and delivers the result: the 3D track.
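A minimal sketch of how arrival times at several fixed hydrophones can be turned into a position estimate by least squares (straight-line propagation and an approximate sound speed are assumed; the function name and test geometry are illustrative, and this is not the algorithm of any particular vendor's software):

```python
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1480.0  # m/s, rough value for water (assumption)

def locate_tag(hydrophones, arrival_times):
    """Estimate a tag's 3D position and unknown emission time from the
    ping's arrival times at four or more hydrophones with known positions."""
    hydrophones = np.asarray(hydrophones, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)
    x0 = np.append(hydrophones.mean(axis=0), arrival_times.min())  # initial guess

    def residuals(params):
        pos, t0 = params[:3], params[3]
        predicted = t0 + np.linalg.norm(hydrophones - pos, axis=1) / SOUND_SPEED
        return predicted - arrival_times

    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3]

# Synthetic check: a tag at (12, -4, 6) m pinging at t0 = 0.25 s.
receivers = [(0, 0, 0), (50, 0, 0), (0, 50, 0), (50, 50, 0), (25, 25, 30)]
true_pos = np.array([12.0, -4.0, 6.0])
times = 0.25 + np.linalg.norm(np.array(receivers, dtype=float) - true_pos, axis=1) / SOUND_SPEED
pos, t0 = locate_tag(receivers, times)
print(pos, t0)  # should recover roughly (12, -4, 6) and 0.25
```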
Applications
Rivers
Dams
At present, acoustic tags are most commonly used to monitor fish approaching diversion and guidance structures at hydropower dams. This allows hydropowered dam facilities, public utility districts, and municipalities to evaluate specific migration pathways used by the fish (most often salmon smolts), identify where fish mortality occurs and assess fish behavior in relation to hydrodynamic conditions and/or any other environmental parameters. Ultimately, working to improve bypass effectiveness and protect fish populations, Acoustic Tag Tracking Systems are a significant breakthrough in the preservation of migrating salmon populations. For an example of Acoustic Tag Tracking Systems at work on the Columbia River, see Grant County's most recent application or Chelan County's most recent application.
Acoustic tags have been employed to help public utility agencies, private firms, and state and federal agencies meet fisheries regulations as defined by the Federal Energy Regulatory Commission (FERC).
Lakes
Ocean
Nearshore Ecosystem
Offshore Ecosystem
See also
Animal migration tracking
Data storage tag
GIS and aquatic science
Pop-up satellite archival tag
References
Ichthyology
Fisheries science
Sound
Acoustics | Acoustic tag | [
"Physics"
] | 1,200 | [
"Classical mechanics",
"Acoustics"
] |
6,893,544 | https://en.wikipedia.org/wiki/Folding%20funnel | The folding funnel hypothesis is a specific version of the energy landscape theory of protein folding, which assumes that a protein's native state corresponds to its free energy minimum under the solution conditions usually encountered in cells. Although energy landscapes may be "rough", with many non-native local minima in which partially folded proteins can become trapped, the folding funnel hypothesis assumes that the native state is a deep free energy minimum with steep walls, corresponding to a single well-defined tertiary structure. The term was introduced by Ken A. Dill in a 1987 article discussing the stabilities of globular proteins.
The folding funnel hypothesis is closely related to the hydrophobic collapse hypothesis, under which the driving force for protein folding is the stabilization associated with the sequestration of hydrophobic amino acid side chains in the interior of the folded protein. This allows the water solvent to maximize its entropy, lowering the total free energy. On the side of the protein, free energy is further lowered by favorable energetic contacts: isolation of electrostatically charged side chains on the solvent-accessible protein surface and neutralization of salt bridges within the protein's core. The molten globule state predicted by the folding funnel theory as an ensemble of folding intermediates thus corresponds to a protein in which hydrophobic collapse has occurred but many native contacts, or close residue-residue interactions represented in the native state, have yet to form.
In the canonical depiction of the folding funnel, the depth of the well represents the energetic stabilization of the native state versus the denatured state, and the width of the well represents the conformational entropy of the system. The surface outside the well is shown as relatively flat to represent the heterogeneity of the random coil state. The theory's name derives from an analogy between the shape of the well and a physical funnel, in which dispersed liquid is concentrated into a single narrow area.
Background
The protein folding problem is concerned with three questions, as stated by Ken A. Dill and Justin L. MacCallum: (i) How can an amino acid sequence determine the 3D native structure of a protein? (ii) How can a protein fold so quickly despite a vast number of possible conformations (Levinthal's Paradox)? How does the protein know what conformations not to search? And (iii) is it possible to create a computer algorithm to predict a protein's native structure based on its amino acid sequence alone? Auxiliary factors inside the living cell such as folding catalysts and chaperones assist in the folding process but do not determine the native structure of a protein. Studies during the 1980s focused on models that could explain the shape of the energy landscape, a mathematical function that describes the free energy of a protein as a function of the microscopic degrees of freedom.
After introducing the term in 1987, Ken A. Dill surveyed the polymer theory in protein folding, in which it addresses two puzzles, the first one being the Blind Watchmaker's Paradox in which biological proteins could not originate from random sequences, and the second one being Levinthal's Paradox that protein folding cannot happen randomly. Dill pulled the idea from the Blind Watchmaker into his metaphor for protein folding kinetics. The native state of protein can be achieved through a folding process involving some small bias and random choices to speed up the search time. That would mean even residues at very different positions in the amino acid sequence will be able to come into contact with each other. Yet, a bias during the folding process can change the folding time by tens to hundreds of orders of magnitude.
As protein folding process goes through a stochastic search of conformations before reaching its final destination, the vast number of possible conformations is considered irrelevant, while the kinetic traps begin to play a role. The stochastic idea of protein intermediate conformations reveals the concept of an “energy landscape” or "folding funnel" in which folding properties are related to free energy and that the accessible conformations of a protein are reduced as it approaches native-like structure. The y-axis of the funnel represents the "internal free energy" of a protein: the sum of hydrogen bonds, ion-pairs, torsion angle energies, hydrophobic and solvation free energies. The many x-axes represent the conformational structures, and those that are geometrically similar to each other are close to one another in the energy landscape. The folding funnel theory is also supported by Peter G Wolynes, Zaida Luthey-Schulten and Jose Onuchic, that folding kinetics should be considered as progressive organization of partially folded structures into an ensemble (a funnel), rather than a serial linear pathway of intermediates.
Native states of proteins are shown to be thermodynamically stable structures that exist in physiological conditions, and are proven in experiments with ribonuclease by Christian B. Anfinsen (see Anfinsen's dogma). It is suggested that because the landscape is encoded by the amino-acid sequence, natural selection has enabled proteins to evolve so that they are able to fold rapidly and efficiently. In a native low-energy structure, there's no competition among conflicting energy contributions, leading to a minimal frustration. This notion of frustration is further measured quantitatively in spin glasses, in which the folding transition temperature Tf is compared to the glass transition temperature Tg. Tf represents the native interactions in the folded structure and Tg represents the strength of non-native interactions in other configurations. A high Tf/Tg ratio indicates a faster folding rate in a protein and fewer intermediates compared to others. In a system with high frustration, mild difference in thermodynamic condition can lead to different kinetic traps and landscape ruggedness.
Proposed funnel models
Funnel-shaped energy landscape
Ken A. Dill and Hue Sun Chan (1997) illustrated a folding pathway design based on Levinthal's Paradox, named the "golf-course" landscape, where a random search for the native state would effectively be impossible: on the hypothetically "flat playing field" the protein "ball" would take an extremely long time to find and fall into the native "hole". However, a rugged pathway deviating from the initially smooth golf course creates a directed tunnel through which the denatured protein passes to reach its native structure, and there can exist valleys (intermediate states) or hills (transition states) along the pathway to the native state. Yet this proposed pathway sets up a contrast between pathway dependence and pathway independence, the Levinthal dichotomy, and emphasizes a one-dimensional route through conformation space.
Another approach to protein folding eliminates the term "pathway" and replaces it with "funnels", which are concerned with parallel processes, ensembles and multiple dimensions rather than a single sequence of structures that a protein must pass through. An ideal funnel thus consists of a smooth, multi-dimensional energy landscape in which an increasing number of interchain contacts correlates with decreasing degrees of freedom and, ultimately, attainment of the native state.
Unlike an idealized smooth funnel, a rugged funnel contains kinetic traps, energy barriers, and narrow throughway paths to the native state. This accounts for the accumulation of misfolded intermediates, in which kinetic traps prevent protein intermediates from reaching their final conformation. Chains stuck in such a trap must break favorable contacts that do not lead to the native state, climb back toward their starting point, and find a different search route downhill. The Moat landscape, on the other hand, illustrates a variety of routes, including an obligatory kinetic trap, that protein chains take to reach their native state. This energy landscape stems from a study by Christopher Dobson and his colleagues on hen egg white lysozyme, in which half of the population folds rapidly and directly, while the other half first forms its α-helical domain quickly and then its β-sheet domain slowly. It differs from the rugged landscape in that the kinetic traps are not accidental but obligatory stages that portions of the protein must pass through before reaching the final state. Both the rugged landscape and the Moat landscape nonetheless express the same idea that protein configurations may encounter kinetic traps during folding. The Champagne Glass landscape, in contrast, involves free energy barriers arising from conformational entropy and partly resembles the random golf-course landscape: the protein chain configuration becomes lost and must spend time searching for the path downhill. This situation can apply to the conformational search of polar residues that will eventually connect two hydrophobic clusters.
The foldon volcano-shaped funnel model
In another study, Rollins and Dill (2014) introduced the Foldon Funnel Model, an addition to previous folding funnels in which secondary structures form sequentially along the folding pathway and are stabilized by tertiary interactions. The model predicts that the free energy landscape is volcano-shaped rather than a simple funnel: the outer landscape slopes uphill because isolated protein secondary structures are unstable. As these secondary structures are stabilized by tertiary interactions, the increasingly native-like intermediates still rise in free energy until the final step, which is downhill; the highest free energy on the volcano landscape therefore occurs at the structure formed just before the native state. This predicted landscape is consistent with experiments showing that most protein secondary structures are unstable on their own and with measured protein equilibrium cooperativities, and it implies that all steps before the native state are in pre-equilibrium. Despite these differences from earlier models, the Foldon Funnel Model still divides conformational space into two kinetic states: native versus all others.
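As a rough numerical illustration of the volcano shape described above (the per-foldon penalty and the final stabilization below are arbitrary placeholder values, not numbers taken from Rollins and Dill), a short Python sketch of the cumulative free energy profile looks like this:

# Toy volcano-shaped free energy profile for a sequential foldon model.
# Assumed, illustrative numbers: each prematurely formed foldon costs
# +1.5 (arbitrary units) until the last step, which is strongly downhill.

N_FOLDONS = 5                  # number of foldons in the hypothetical protein
STEP_PENALTY = 1.5             # free energy cost per prematurely formed foldon
NATIVE_STABILIZATION = -12.0   # net free energy of the fully folded state

def free_energy(k):
    """Cumulative free energy after k foldons have formed (0 <= k <= N_FOLDONS)."""
    if k < N_FOLDONS:
        return k * STEP_PENALTY      # uphill outer slope of the volcano
    return NATIVE_STABILIZATION      # final step is downhill

profile = [free_energy(k) for k in range(N_FOLDONS + 1)]
print(profile)   # [0.0, 1.5, 3.0, 4.5, 6.0, -12.0]; the peak precedes the native state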
Application
Folding funnel theory has both qualitative and quantitative applications. Visualizing funnels provides a communication tool linking the statistical mechanical properties of proteins to their folding kinetics. It also suggests that the folding process is robust and hard to destroy by mutation, provided the native state remains stable: a mutation may block one route to the native state, but another route can take over as long as it still reaches the final structure.
A protein's stability increases as it approaches its native state through partially folded configurations. Local structures such as helices and turns form first, followed by global assembly. Although the process involves trial and error, folding can be fast because proteins reach the native structure through this divide-and-conquer, local-to-global process. The folding funnel also helps rationalize the role of chaperones: the re-folding of a protein can be catalyzed by chaperones that pull it apart, returning it to a higher point on the energy landscape so that it can fold again through renewed trial and error. Funneled landscapes further suggest that different individual molecules of the same protein sequence may take microscopically different routes to the same destination, with some paths more populated than others.
Funnels also highlight the basic difference between folding and the analogy of a simple classical chemical reaction. A chemical reaction starts from a reactant A and proceeds through a change in structure to a product B; folding, by contrast, is a transition from disorder to order, not merely from one structure to another. A simple one-dimensional reaction pathway does not capture the reduction in conformational degeneracy during folding. In other words, folding funnels provide a microscopic framework for folding kinetics. Folding kinetics is otherwise described by simple mass-action models, D-I-N (on-path intermediate I between denatured D and native N) or X-D-N (off-path intermediate X), which constitute the macroscopic framework of folding. The sequential micropath view corresponds to the mass-action model and explains folding kinetics in terms of pathways, transition states, on- and off-path intermediates and what is seen in experiments, but it is not concerned with the conformation of an individual molecule, or the state of a monomer sequence, at a specific macroscopic transition state; its weakness is the searching problem raised by Levinthal's paradox. Funnel models, in contrast, aim to explain the kinetics in terms of the underlying physical forces and to predict the microstate composition of those macrostates.
Nonetheless, it remains challenging for energy-landscape computer simulations to reconcile the "macroscopic" view of mass-action models with a "microscopic" understanding of the changes in protein conformation during folding. Insights from funnels have so far been insufficient to improve computational search methods, and a landscape that is smooth and funnel-shaped on a global scale can still appear rough on a local scale in simulations.
See also
Chaperone – proteins that assist other proteins with folding or unfolding
Levinthal paradox
Protein structure prediction
References
Further reading
Biochemical reactions
Protein structure | Folding funnel | [
"Chemistry",
"Biology"
] | 2,580 | [
"Biochemistry",
"Protein structure",
"Structural biology",
"Biochemical reactions"
] |
6,894,253 | https://en.wikipedia.org/wiki/Passenger%20leukocyte | In tissue and organ transplantation, the passenger leukocyte theory is the proposition that leucocytes within a transplanted allograft sensitize the recipient's alloreactive T-lymphocytes, causing transplant rejection.
The concept was first proposed by George Davis Snell, and the term was coined in 1968 when Elkins and Guttmann showed that leukocytes present in a donor graft initiate an immune response in the recipient of a transplant.
See also
History of immunology
References
Further reading
Immunology
Organ transplantation | Passenger leukocyte | [
"Biology"
] | 113 | [
"Immunology"
] |
6,894,506 | https://en.wikipedia.org/wiki/Hydrophobic%20collapse | Hydrophobic collapse is a proposed process for the production of the 3-D conformation adopted by polypeptides and other molecules in polar solvents. The theory states that the nascent polypeptide forms initial secondary structure (ɑ-helices and β-strands) creating localized regions of predominantly hydrophobic residues. The polypeptide interacts with water, thus placing thermodynamic pressures on these regions which then aggregate or "collapse" into a tertiary conformation with a hydrophobic core. Incidentally, polar residues interact favourably with water, thus the solvent-facing surface of the peptide is usually composed of predominantly hydrophilic regions.
Hydrophobic collapse may also reduce the affinity of conformationally flexible drugs for their protein targets, by reducing the net hydrophobic contribution to binding through self-association of different parts of the drug while in solution. Conversely, rigid scaffolds (also called privileged structures) that resist hydrophobic collapse may enhance drug affinity.
Partial hydrophobic collapse is an experimentally accepted model for the folding kinetics of many globular proteins, such as myoglobin, alpha-lactalbumin, barstar, and staphylococcal nuclease. However, because experimental evidence of early folding events is difficult to obtain, hydrophobic collapse is often studied in silico via molecular dynamics and Monte Carlo simulations of the folding process. Globular proteins that are thought to fold by hydrophobic collapse are particularly amenable to complementary computational and experimental study using phi value analysis.
Biological significance
Correct protein folding is integral to proper functionality within biological systems. Hydrophobic collapse is one of the main events necessary for reaching a protein's stable and functional conformation. Proteins perform extremely specific functions which are dependent on their structure. Proteins that do not fold correctly are nonfunctional and contribute nothing to a biological system.
Hydrophobic aggregation can also occur between unrelated polypeptides: if locally hydrophobic regions of two unrelated structures are left near each other in aqueous solution, aggregation will occur, which can have drastic effects on the health of the organism. The formation of amyloid fibrils, insoluble aggregates of hydrophobic protein, can lead to a range of diseases including Parkinson's and Alzheimer's disease.
Energetics
The driving force behind protein folding is not well understood; hydrophobic collapse is one of many theories thought to influence how a nascent polypeptide folds into its native state. Hydrophobic collapse can be visualized as part of the folding funnel model, which leads a protein to its lowest kinetically accessible energy state. In this model, the interactions of the peptide backbone are not considered, since the backbone maintains its stability in non-polar and polar environments as long as there is sufficient hydrogen bonding within it; only the thermodynamic contributions of the side chains to protein stability are considered.
When placed in a polar solvent, polar side chains can form weak intermolecular interactions with the solvent, in particular hydrogen bonds. The solvent can maintain hydrogen bonding with itself as well as with the polypeptide, which maintains the stability of localized segments of the protein. Non-polar side chains, however, cannot participate in hydrogen bonding. The inability of the solvent to interact with these side chains leads to a decrease in the entropy of the system: the solvent can still interact with itself, but the portion of each solvent molecule in proximity to a non-polar side chain cannot form any significant interactions, so the degrees of freedom available to the molecule decrease and entropy falls. By aggregating the hydrophobic regions, the solvent reduces the surface area exposed to non-polar side chains and thus reduces the localized regions of decreased entropy. Although the entropy of the polypeptide decreases as it enters a more ordered state, the overall entropy of the system increases, contributing to the thermodynamic favourability of the folded polypeptide.
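A back-of-the-envelope way to gauge the size of this effect is to multiply the buried non-polar surface area by an empirical transfer free energy per unit area; figures of roughly 20-30 cal/(mol·Å²) are commonly quoted, but the coefficient and the buried area in the sketch below are illustrative assumptions rather than data from this article:

# Rough estimate of the free energy gained by burying non-polar surface
# area during hydrophobic collapse. The coefficient is an assumed,
# commonly quoted empirical value; the buried area is a placeholder.

TRANSFER_COEFF = 0.025   # kcal/(mol*A^2), assumed hydrophobic transfer free energy
buried_area = 1500.0     # A^2 of non-polar surface buried (hypothetical small protein)

delta_g = -TRANSFER_COEFF * buried_area   # negative = favourable
print(f"Approximate hydrophobic stabilization: {delta_g:.1f} kcal/mol")
# -> about -37.5 kcal/mol for these assumed numbers, before opposing
#    contributions such as chain conformational entropy are subtracted.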
In the folding funnel picture, the polypeptide is at its highest energy state when unfolded in aqueous solution. As it forms localized folding intermediates, or molten globules, the energy of the system decreases, and the polypeptide continues folding into lower-energy states as long as those conformations are kinetically accessible. The native conformation does not have to sit at the lowest energy trough of the landscape; it must simply be the naturally occurring, kinetically accessible conformation in biological systems.
Surface structures
The formation of a hydrophobic core requires the surface structures of this aggregate to maintain contact with both the polar solvent and the internal structures. To do this, the surface structures are usually amphipathic. A surface-exposed alpha helix may place nonpolar residues at the N+3 and N+4 positions, allowing the helix to present nonpolar character on one side when viewed along its longitudinal axis: non-polar amino acids line one face of the helix while charged or polar amino acids line the other. This gives the structure the longitudinal amphipathic character necessary for hydrophobic aggregation along the non-polar face. Beta strands can adopt the same property through simple alternation of polar and nonpolar residues, since each N+1 side chain occupies space on the opposite side of the strand.
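One simple way to quantify this kind of amphipathicity is the helical hydrophobic moment, which sums per-residue hydrophobicities as vectors spaced 100° apart around the helix axis (3.6 residues per turn). The sketch below uses a crude binary hydrophobicity assignment rather than a published scale, and the example sequences are made up, so it only illustrates the calculation:

import math

# Crude binary hydrophobicity: 1 for typically non-polar residues, 0 otherwise.
# (A real calculation would use a published scale such as Eisenberg's.)
HYDROPHOBIC = set("AVLIMFWCY")

def hydrophobic_moment(sequence, degrees_per_residue=100.0):
    """Magnitude of the helical hydrophobic moment for an ideal alpha helix."""
    mu_x = mu_y = 0.0
    for n, residue in enumerate(sequence):
        h = 1.0 if residue in HYDROPHOBIC else 0.0
        angle = math.radians(n * degrees_per_residue)
        mu_x += h * math.cos(angle)
        mu_y += h * math.sin(angle)
    return math.hypot(mu_x, mu_y)

# Hypothetical 18-residue segment with non-polar residues clustered on one face:
print(hydrophobic_moment("LKALEELLKALEELLKAL"))  # sizeable moment -> amphipathic
print(hydrophobic_moment("LLLLLLLLLLLLLLLLLL"))  # uniform -> moment near zero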
References
Protein structure | Hydrophobic collapse | [
"Chemistry"
] | 1,133 | [
"Protein structure",
"Structural biology"
] |
6,895,389 | https://en.wikipedia.org/wiki/Urocanic%20acid | Urocanic acid (formally trans-Urocanic acid) is an intermediate in the catabolism of L-histidine. The cis-urocanic acid isomer is rare.
Metabolism
It is formed from L-histidine through the action of histidine ammonia-lyase (also known as histidase or histidinase) by elimination of ammonia.
In the liver, urocanic acid is transformed by urocanate hydratase (or urocanase) to 4-imidazolone-5-propionic acid and subsequently to glutamic acid.
Clinical significance
Inherited deficiency of urocanase leads to elevated levels of urocanic acid in the urine, a condition known as urocanic aciduria.
An important role for the onset of atopic dermatitis and asthma has been attributed to filaggrin, a skin precursor of urocanic acid.
Urocanic acid is thought to be a significant attractant of the nematode parasite Strongyloides stercoralis, in part because of relatively high levels in the plantar surfaces of the feet, the site through which this parasite often enters the body.
Function
Urocanic acid was detected in animal sweat and skin where, among other possible functions, it acts as an endogenous sunscreen or photoprotectant against UVB-induced DNA damage. Urocanic acid is found predominantly in the stratum corneum of the skin and it is likely that most of it is derived from filaggrin catabolism (a histidine-rich protein). When exposed to UVB irradiation, trans-urocanic acid is converted in vitro and in vivo to cis-urocanic acid (cis-UCA). The cis form is known to activate regulatory T cells.
Some studies attribute to filaggrin an important role in keeping the skin surface slightly acidic, through its breakdown to form histidine and subsequently trans-urocanic acid; however, others have shown that the filaggrin–histidine–urocanic acid cascade is not essential for skin acidification.
History
Urocanic acid was first isolated in 1874 by the chemist Max Jaffé from the urine of a dog, hence the name (uro- = urine, and canis = dog).
See also
Histidinemia
Inborn error of metabolism
References
External links
The Online Metabolic and Molecular Bases of Inherited Disease - Chapter 80 - An overview of disorders of histidine metabolism, including urocanic aciduria.
Imidazoles
Carboxylic acids
Alkene derivatives | Urocanic acid | [
"Chemistry"
] | 536 | [
"Carboxylic acids",
"Functional groups"
] |
33,153,479 | https://en.wikipedia.org/wiki/Flow%20waveform | The Flow waveform for the human respiratory system in lung ventilators, is the shape of air flow that is blown into the patient's airways. Computer technology allows the practitioner to select particular flow patterns, along with volume and pressure settings, in order to achieve the best patient outcomes and reduce complications experienced while on a mechanical ventilator.
Description
Modern lung ventilators are able to generate three basic flow waveforms: the square waveform, the descending (decelerating) waveform, and the sinusoidal waveform. A square waveform pattern is found on most mechanical ventilators, old and new, and delivers a constant flow.
During the inspiration phase, the flow rate rises to a predetermined level and remains constant, thus giving the appearance of a square wave form. This produces the shortest inspiratory time compared to other flow patterns. A decelerating flow waveform pattern, also known as descending ramp, achieves the highest level of flow at the start of a breath, when patient flow demand is often greatest.
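As a rough numerical illustration of these three patterns (the tidal volume, inspiratory time and sample count below are arbitrary example values, not settings recommended for any ventilator), the sketch below generates square, descending-ramp and sinusoidal inspiratory flow profiles that each deliver the same tidal volume:

import math

TIDAL_VOLUME = 0.5      # litres delivered per breath (example value)
INSP_TIME = 1.0         # inspiratory time in seconds (example value)
SAMPLES = 50            # samples across the inspiratory phase

def square_flow(t):
    # constant flow; mean flow equals peak flow
    return TIDAL_VOLUME / INSP_TIME

def descending_ramp_flow(t):
    # peak flow at the start of the breath, falling linearly to zero
    peak = 2.0 * TIDAL_VOLUME / INSP_TIME
    return peak * (1.0 - t / INSP_TIME)

def sinusoidal_flow(t):
    # half-sine profile; peak flow = (pi/2) * mean flow
    peak = (math.pi / 2.0) * TIDAL_VOLUME / INSP_TIME
    return peak * math.sin(math.pi * t / INSP_TIME)

dt = INSP_TIME / SAMPLES
for name, f in [("square", square_flow),
                ("descending ramp", descending_ramp_flow),
                ("sinusoidal", sinusoidal_flow)]:
    volume = sum(f(i * dt) * dt for i in range(SAMPLES))  # numerical integration
    print(f"{name}: delivered volume ~ {volume:.3f} L")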
See also
Artificial ventilation
Respiratory therapy
List of ventilator manufacturers
References
Fluid dynamics | Flow waveform | [
"Chemistry",
"Engineering"
] | 225 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
33,160,241 | https://en.wikipedia.org/wiki/Model%20Driven%20Interoperability | Model Driven Interoperability (MDI) is a methodological framework, which provides a conceptual and technical support to make interoperable enterprises using ontologies and semantic annotations, following model driven development (MDD) principles.
Overview
The initial idea behind work on MDI was the application of model-driven methods and techniques to solving interoperability problems from the business level down to the data level.
The three main ideas of the Model Driven Interoperability (MDI) approach are:
Interoperability should be achieved at different levels: Business, Knowledge, Application and Data.
The main idea is to follow a Model Driven Engineering (MDE) approach. A systematic use of models as primary engineering artefacts throughout the engineering life cycle is therefore promoted, combined with both Domain Specific Modelling Languages and transformation engines and generators.
The use of ontologies and semantic annotations is needed in order to perform model transformation from enterprise level to code level.
History
MDI was initiated in 2004 with the beginning of two important research projects:
INTEROP NoE (Interoperability Research for Networked Enterprises Applications and Software Network of Excellence, FP6-IST 508011).
ATHENA IP (Advanced Technologies for interoperability of Heterogeneous Enterprise Networks and their Applications Integrated Project) (FP6-IST-507849).
Both projects were supported by the European Commission. They worked on both the definition of a methodological framework and the application of MDI to concrete cases.
MDI Topics
MDI Framework (INTEROP NoE)
The MDI Framework within INTEROP is defined:
From a conceptual point of view: providing a Reference Model in which an Interoperability Model is proposed, defined at different levels of abstraction.
From a methodological point of view: providing the Model Driven Interoperability (MDI) Method as a method (principle and structure) to enable interoperable Enterprise Software Applications (ESA), starting from the level of the Enterprise Model rather than from the code level and using a model-driven approach combined with ontologies and semantic annotations.
From a technological point of view: providing vertical and horizontal semantic support in order to perform model transformations.
The Reference Model
The Reference Model proposed for the MDI approach shows the different kinds of models that can be produced at the different levels of abstraction, and the successive model transformations that need to be carried out.
The different levels of abstraction are needed to make model transformations possible, reducing the gap between enterprise models and the code level. The definition of the levels is based on the Model Driven Architecture (MDA), which defines three levels of abstraction: CIM, PIM and PSM. In addition, the CIM level is partitioned into two sub-levels in order to reduce the gap between the CIM and PIM levels. An Interoperability Model is also defined at each of the levels of abstraction proposed above.
One example of this Reference Model for MDI is a franchising scenario, in which different kinds of models are produced at each of the proposed levels (GRAI at the Top CIM level, and UML at the other levels), with the final objective of making two ESAs interoperable: the franchisor's ERP and the franchisee's CRM.
Model Driven Interoperability Method
The Model Driven Interoperability Method (MDI Method) is a model-driven method for two enterprises that need to interoperate not only at the code level but also at the enterprise modelling level, with ontological support and with the final aim of improving their performance.
It uses model transformations to achieve interoperability, defining models and an Interoperability Model at different levels of abstraction according to an MDA approach and dividing the CIM level into two sub-levels: the Top CIM level (TCIM) and the Bottom CIM level (BCIM).
It uses a Common Ontology to support these transformations and to solve interoperability problems at the semantic level.
The MDI Method proposed to solve interoperability problems is, as its name indicates, based on the MDA approach. In addition, the following principles were applied in defining the method:
The MDI Method is organised as an iterative process, like the Unified Process (UP) and other object-oriented processes.
The MDI Method also proposes semantic support such as the Semantics of Business Vocabulary and Business Rules (SBVR).
The main features of the MDI Method are:
Its main phases: four phases corresponding to the passage from one level of abstraction to the next lower one.
Its main workflows, especially the three process workflows related to the three main components of the MDI method: the Interoperability Model, the Common Interoperability Ontology and the Model Transformation.
MDI Framework (ATHENA IP)
The MDI Framework from ATHENA provides guidance on how MDD should be applied to address interoperability. The framework is structured in three main integration areas:
Conceptual integration, which focuses on concepts, metamodels, languages and model relationships. It provides us with a foundation for systematising various aspects of software model interoperability.
Technical integration, which focuses on the software development and execution environments. It provides us with development tools for developing software models and execution platforms for executing software models.
Applicative integration, which focuses on methodologies, standards and domain models. It provides us with guidelines, principles and patterns that can be used to solve software interoperability issues.
Conceptual integration
The reference model for conceptual integration has been developed from an MDD point of view, focusing on the enterprise applications and software systems.
According to MDA, a Computation Independent Model (CIM) corresponds to a view defined by a computation independent viewpoint. It describes the business context and business requirements for the software system(s). A Platform Independent Model (PIM) corresponds to a view defined by a platform independent viewpoint. It describes software specifications independent of execution platforms. A Platform Specific Model (PSM) corresponds to a view defined by a platform specific viewpoint. It describes the realisation of software systems.
Technical integration
The technical integration reference model promotes the use of service-oriented solutions, in which a software system, and more generally a system, provides a set of services required by the businesses and users of the enterprise.
Applicative integration
The reference model for applicative integration has been developed in order to emphasise the dependencies between the different models and views to achieve interoperability.
Model Transformations
Model transformation is one of the key approaches used to support the MDI Method. It is used in both the horizontal and vertical dimensions of the Reference Model for MDI. All model transformations performed are based on the generic transformation architecture.
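As a purely illustrative sketch (not the INTEROP or ATHENA tooling, which would typically rely on dedicated transformation languages such as ATL or QVT), the following Python fragment shows the flavour of a vertical, rule-based model-to-model transformation that maps a hypothetical business-level activity element onto a platform-independent service description; all element names and attributes are invented:

# Toy vertical model transformation: a hypothetical CIM-level business
# activity is mapped to a PIM-level service description. Element names
# and attributes are invented for illustration only.

def transform_activity_to_service(activity: dict) -> dict:
    """Map one business-level activity onto a platform-independent service."""
    return {
        "service": activity["name"].replace(" ", "") + "Service",
        "operation": "process" + activity["output"].capitalize(),
        "input": activity["input"],
        "output": activity["output"],
        # semantic annotation carried along to support later reconciliation
        "annotation": activity.get("ontology_concept"),
    }

cim_activity = {
    "name": "Handle Purchase Order",
    "input": "order",
    "output": "invoice",
    "ontology_concept": "commonOntology:OrderHandling",
}
print(transform_activity_to_service(cim_activity))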
Semantic Support
The following services can support MDI in tackling both vertical and horizontal issues: verification of the consistency of models, support for automatic mapping discovery among heterogeneous models, and definition of semantics-preserving transformations.
Vertical issues: semantic support aiming at:
Giving a logic-based formalization of portions of models via semantic annotations easing reuse, cross-reference, and unambiguous terminology.
Tracing the changes (among the different layers of MDD transformations).
Formalizing the delta-knowledge used in semantic enriching transformations (i.e. the transformations from more abstract models to more detailed ones).
Horizontal issues: semantic support aiming at:
Performing semantic mismatches analysis among the models of different enterprises.
Representing model correspondences across enterprises through semantic annotations.
Creating reconciliation rules for performing data, service and business process reconciliation.
See also
Enterprise Integration
Enterprise Modelling
Enterprise Modelling Language
Interoperability
Architecture of Interoperable Information Systems
Metamodeling
Model Driven Integration
Model Driven Development
Model Driven Engineering
Model-driven architecture
Model Transformation
Mapping Languages
Enterprise Ontology
Semantic Annotation
References
External links
INTEROP-VLab
Interoperability | Model Driven Interoperability | [
"Engineering"
] | 1,621 | [
"Telecommunications engineering",
"Interoperability"
] |
3,895,745 | https://en.wikipedia.org/wiki/Smart%20meter | A smart meter is an electronic device that records information—such as consumption of electric energy, voltage levels, current, and power factor—and communicates the information to the consumer and electricity suppliers. Advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier.
Description
The term smart meter often refers to an electricity meter, but it also may mean a device measuring natural gas, water or district heating consumption. More generally, a smart meter is an electronic device that records information such as consumption of electric energy, voltage levels, current, and power factor. Smart meters communicate the information to the consumer for greater clarity of consumption behavior, and electricity suppliers for system monitoring and customer billing. Smart meters typically record energy near real-time, and report regularly, in short intervals throughout the day. Smart meters enable two-way communication between the meter and the central system. Smart meters may be part of a smart grid, but do not themselves constitute a smart grid.
Such an advanced metering infrastructure (AMI) differs from automatic meter reading (AMR) in that it enables two-way communication between the meter and the supplier. Communications from the meter to the network may be wireless, or via fixed wired connections such as power line carrier (PLC). Wireless communication options in common use include cellular communications, Wi-Fi (readily available), wireless ad hoc networks over Wi-Fi, wireless mesh networks, low power long-range wireless (LoRa), Wize (high radio penetration rate, open, using the frequency 169 MHz), Zigbee (low power, low data rate wireless), and Wi-SUN (Smart Utility Networks).
Similar meters, usually referred to as interval or time-of-use meters, have existed for years, but smart meters usually involve real-time or near real-time sensors, power outage notification, and power quality monitoring. These additional features are more than simple automated meter reading (AMR). They are similar in many respects to Advanced Metering Infrastructure (AMI) meters. Interval and time-of-use meters have historically been installed to measure commercial and industrial customers, but may not have automatic reading. Research by the UK consumer group Which? showed that as many as one in three consumers confuse smart meters with energy monitors, also known as in-home display monitors.
History
In 1972, Theodore Paraskevakos, while working with Boeing in Huntsville, Alabama, developed a sensor monitoring system that used digital transmission for security, fire, and medical alarm systems as well as meter reading capabilities. This technology was a spin-off from the automatic telephone line identification system, now known as Caller ID.
In 1974, Paraskevakos was awarded a U.S. patent for this technology. In 1977, he launched Metretek, Inc., which developed and produced the first smart meters. Since this system was developed pre-Internet, Metretek utilized the IBM series 1 mini-computer. For this approach, Paraskevakos and Metretek were awarded multiple patents.
The installed base of smart meters in Europe at the end of 2008 was about 39 million units, according to analyst firm Berg Insight. Globally, Pike Research found that smart meter shipments were 17.4 million units for the first quarter of 2011. Visiongain determined that the value of the global smart meter market would reach US$7 billion in 2012.
In 2013, H.M. Zahid Iqbal, M. Waseem, and Tahir Mahmood, researchers at the University of Engineering & Technology Taxila, Pakistan, described a smart energy meter design in their article "Automatic Energy Meter Reading using Smart Energy Meter". The article outlined key features including automatic remote meter reading via GSM for utility companies and customers, real-time monitoring of a customer's running load, remote disconnection and reconnection of customer connections by the utility company, and convenient billing that eliminates the need for meter readers to physically visit customers.
Over 99 million smart electricity meters had been deployed across the European Union, with an estimated 24 million more to be installed by the end of 2020. The European Commission DG Energy estimated that the 2020 installed base required €18.8 billion in investment, growing to €40.7 billion by 2030, with a total deployment of 266 million smart meters.
By the end of 2018, the U.S. had over 86 million smart meters installed. In 2017, there were 665 million smart meters installed globally. Revenue generation is expected to grow from $12.8 billion in 2017 to $20 billion by 2022.
Purpose
Since the inception of electricity deregulation and market-driven pricing throughout the world, utilities have been looking for a means to match consumption with generation. Non-smart electrical and gas meters only measure total consumption, providing no information of when the energy was consumed. Smart meters provide a way of measuring electricity consumption in near real-time. This allows utility companies to charge different prices for consumption according to the time of day and the season. It also facilitates more accurate cash-flow models for utilities. Since smart meters can be read remotely, labor costs are reduced for utilities.
Smart metering offers potential benefits to customers. These include (a) an end to estimated bills, which are a major source of complaints for many customers, and (b) a tool to help consumers better manage their energy purchases: smart meters with a display outside their homes could provide up-to-date information on gas and electricity consumption and in doing so help people to manage their energy use and reduce their energy bills. With regard to consumption reduction, this is critical for understanding the benefits of smart meters, because the relatively small percentage savings are multiplied by millions of users. Smart meters for water consumption can also provide detailed and timely information about customer water use and early notification of possible water leaks on their premises. Electricity pricing usually peaks at certain predictable times of the day and the season; in particular, if generation is constrained, prices can rise if power from other jurisdictions or more costly generation is brought online. Proponents assert that billing customers at a higher rate for peak times encourages consumers to adjust their consumption habits to be more responsive to market prices, and further that regulatory and market design agencies hope these "price signals" could delay the construction of additional generation, or at least the purchase of energy from higher-priced sources, thereby controlling the steady and rapid increase of electricity prices.
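As a simple worked example of how time-of-use pricing changes a bill relative to a flat tariff (the rates, the peak window and the consumption figures below are invented for illustration and do not correspond to any particular utility), the following sketch compares the two:

# Toy comparison of a flat tariff and a time-of-use (TOU) tariff.
# All prices and consumption figures are invented example values.

FLAT_RATE = 0.20            # currency units per kWh
TOU_RATES = {"peak": 0.35, "off_peak": 0.12}
PEAK_HOURS = range(17, 21)  # 17:00-20:59 treated as peak

# Hypothetical hourly consumption profile for one day (kWh per hour).
hourly_kwh = [0.3] * 7 + [0.8] * 2 + [0.4] * 8 + [1.5] * 4 + [0.6] * 3

flat_bill = sum(hourly_kwh) * FLAT_RATE
tou_bill = sum(
    kwh * (TOU_RATES["peak"] if hour in PEAK_HOURS else TOU_RATES["off_peak"])
    for hour, kwh in enumerate(hourly_kwh)
)
print(f"flat: {flat_bill:.2f}, time-of-use: {tou_bill:.2f}")
# Shifting some of the 17:00-21:00 load to off-peak hours lowers the TOU bill.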
An academic study based on existing trials showed that homeowners' electricity consumption on average is reduced by approximately 3-5% when provided with real-time feedback.
Another advantage of smart meters that benefits both customers and the utility is the monitoring capability they provide for the whole electrical system. As part of an AMI, utilities can use real-time smart meter measurements of current, voltage, and power factor to detect system disruptions more quickly, allowing immediate corrective action to minimize customer impacts such as blackouts. Smart meters also help utilities understand the power grid's needs with more granularity than legacy meters. This greater understanding facilitates system planning to meet customer energy needs while reducing the likelihood of additional infrastructure investments, eliminating unnecessary spending and energy cost increases.
Though the task of meeting national electricity demand with accurate supply is becoming ever more challenging as intermittent renewable generation sources make up a greater proportion of the energy mix, the real-time data provided by smart meters allow grid operators to integrate renewable energy onto the grid in order to balance the networks. As a result, smart meters are considered an essential technology to the decarbonisation of the energy system.
Advanced metering infrastructure
Advanced metering infrastructure (AMI) refers to systems that measure, collect, and analyze energy usage, and communicate with metering devices such as electricity meters, gas meters, heat meters, and water meters, either on request or on a schedule. These systems include hardware, software, communications, consumer energy displays and controllers, customer associated systems, meter data management software, and supplier business systems.
Government agencies and utilities are turning toward advanced metering infrastructure (AMI) systems as part of larger "smart grid" initiatives. AMI extends automatic meter reading (AMR) technology by providing two-way meter communications, allowing commands to be sent toward the home for multiple purposes, including time-based pricing information, demand-response actions, or remote service disconnects. Wireless technologies are critical elements of the neighborhood network, aggregating a mesh configuration of up to thousands of meters for backhaul to the utility's IT headquarters.
The network between the measurement devices and business systems allows the collection and distribution of information to customers, suppliers, utility companies, and service providers. This enables these businesses to participate in demand response services. Consumers can use the information provided by the system to change their normal consumption patterns to take advantage of lower prices. Pricing can be used to curb the growth of peak demand consumption. AMI differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter. Systems only capable of meter readings do not qualify as AMI systems.
AMI implementation relies on four key components: Physical Layer Connectivity, which establishes connections between smart meters and networks; Communication Protocols, which ensure secure and efficient data transmission; Server Infrastructure, which consists of centralized or distributed servers to store, process, and manage data for billing, monitoring, and demand response; and Data Analysis, where analytical tools provide insights, load forecasting, and anomaly detection for optimized energy management. Together, these components help utilities and consumers monitor and manage energy use efficiently, supporting smarter grid management.
Physical Layer Connectivity
Communication is a cornerstone of smart meter technology, enabling reliable and secure data transmission to central systems. However, the diversity of environments in which smart meters operate presents significant challenges. Solutions to these challenges encompass a range of communication methods including Power-line communication (PLC), Cellular network, Wireless mesh network, Short-range, and satellite:
Power-line communication for Smart Metering
Power Line Communication (PLC) stands out among smart metering connectivity technologies because it leverages existing electrical power infrastructure for data transmission. Unlike cellular, radio-frequency (RF), or Wi-Fi-based solutions, PLC does not require building or maintaining separate communication networks, making it inherently more cost-effective and easier to scale. Two major PLC standards in smart metering are G3-PLC and the PRIME Alliance protocol. G3-PLC supports IPv6-based communications and adaptive data rates, providing robust performance even in noisy environments, while PRIME (PoweRline Intelligent Metering Evolution) focuses on efficient, high-speed communication with low-cost implementation. PLC-based smart metering is deployed extensively in regions like Europe, South America, and parts of Asia where dense infrastructure supports its use. Utilities favor PLC for its reliability in urban environments and for connecting large numbers of meters within smart grid networks.
An important feature of G3-PLC and PRIME is their ability to enable mesh networking (also called multi-hop), where smart meters act as repeaters for other meters in the network. This functionality allows meters to relay data from neighboring meters to ensure that the information reaches the Data Concentrator Unit (DCU), even if direct communication is not possible due to distance or signal obstructions. This approach enhances network reliability and coverage, particularly in dense urban environments or geographically challenging areas.
Cellular Network (GPRS, NB-IoT, LTE-M): "Cellular technologies are highly scalable and secure. With national coverage, cellular connectivity can support a large number of meters in densely populated areas as well as reach those in remote locations."
Wireless mesh network (e.g. Wirepas and Wi-SUN): Ideal for urban areas, where devices can relay data to optimize coverage and reliability. It is mostly used for water and gas meters.
Short-range: such as Wireless M-Bus (WMBUS) are commonly used in smart metering applications to enable reliable, low-power communication between utility meters and local data collectors within buildings or neighborhoods.
Hybrid PLC/RF: the PRIME and G3-PLC standards define an integrated approach for the seamless combination of PLC and wireless communication, enhancing reliability and flexibility in smart grids.
Additional options, such as Wi-Fi and internet-based networks, are also in use. However, no single communication solution is universally optimal. The challenges faced by rural utilities differ significantly from those of urban counterparts or utilities in remote, mountainous, or poorly serviced areas.
Smart meters often extend their functionality through integration into Home Area Networks (HANs). These networks enable communication within the household and may include:
In-Premises Displays: Providing real-time energy usage insights for consumers.
Hubs: Interfacing multiple meters with the central head-end system.
Technologies used in HANs vary globally but typically include PLC, wireless ad hoc networks, and Zigbee. By leveraging appropriate connectivity solutions, smart meters can address diverse environmental and infrastructural needs while delivering seamless communication and enhanced functionality.
Smart meters used as a gateway for water and gas meters
Electricity smart meters are starting to be used as gateways for gas and water meters, creating integrated smart metering systems. In this configuration, gas and water meters communicate with the electricity meter using Wireless M-Bus (Wireless Meter-Bus), a European standard (EN 13757-4) designed for secure and efficient data transmission between utility meters and data collectors. The electricity meter then aggregates these data and transmits them to the central utility network via Power Line Communication (PLC), which leverages existing electrical wiring for data transfer.
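A minimal sketch of this gateway role is shown below; the frame fields, meter identifiers and the upstream send function are invented placeholders, and a real implementation would parse standards-compliant Wireless M-Bus (EN 13757-4) frames and push the data over a PLC stack such as G3-PLC:

# Toy sketch of an electricity meter acting as a gateway: readings received
# from gas and water meters (nominally over Wireless M-Bus) are buffered and
# forwarded upstream (nominally over PLC). All identifiers are hypothetical.

from datetime import datetime, timezone

class GatewayMeter:
    def __init__(self, meter_id):
        self.meter_id = meter_id
        self.buffer = []

    def on_wmbus_frame(self, device_id, medium, value, unit):
        """Called whenever a (pretend) Wireless M-Bus frame is received."""
        self.buffer.append({
            "device": device_id,
            "medium": medium,          # "gas" or "water"
            "value": value,
            "unit": unit,
            "received": datetime.now(timezone.utc).isoformat(),
        })

    def push_upstream(self, send_plc):
        """Aggregate buffered readings and hand them to a PLC send function."""
        payload = {"gateway": self.meter_id, "readings": self.buffer}
        send_plc(payload)
        self.buffer = []

gw = GatewayMeter("ELEC-0001")
gw.on_wmbus_frame("GAS-42", "gas", 1234.5, "m3")
gw.on_wmbus_frame("WAT-17", "water", 87.2, "m3")
gw.push_upstream(print)   # stand-in for the real PLC uplink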
Communication Protocols
Smart meter communication protocols are essential for enabling reliable, efficient, and secure data exchange between meters, utilities, and other components of advanced metering infrastructure (AMI). These protocols address the diverse requirements of global markets, supporting various communication methods, from optical ports and serial connections to power line communication (PLC) and wireless networks. Below is an overview of key protocols, including ANSI standards widely used in North America, IEC protocols prevalent in Europe, the globally recognized OSGP for smart grid applications, and the PLC-focused Meters and More, each designed to meet specific needs in energy monitoring and management.
IEC 62056
"IEC 62056 is the most widely adopted protocol" for smart meter communication, enabling reliable, two-way data exchange within Advanced Metering Infrastructure (AMI) systems. It encompasses the DLMS/COSEM protocol for structuring and managing metering data. "It is widely used because of its flexibility, scalability, and ability to support different communication media such as Power Line Communication (PLC), TCP/IP, and wireless networks.". It also supports data transmission over serial connections using ASCII or binary formats, with physical media options such as modulated light (via LED and photodiode) or wired connections (typically EIA-485).
ANSI C12.18
ANSI C12.18 is an ANSI Standard that describes a protocol used for two-way communications with a meter, mostly used in North American markets. The C12.18 Standard is written specifically for meter communications via an ANSI Type 2 Optical Port, and specifies lower-level protocol details. ANSI C12.19 specifies the data tables that are used. ANSI C12.21 is an extension of C12.18 written for modem instead of optical communications, so it is better suited to automatic meter reading. ANSI C12.22 is the communication protocol for remote communications.
OSGP
The Open Smart Grid Protocol (OSGP) is a family of specifications published by the European Telecommunications Standards Institute (ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart metering and smart grid applications. Millions of smart meters based on OSGP are deployed worldwide. On July 15, 2015, the OSGP Alliance announced the release of a new security protocol (OSGP-AES-128-PSK) and its availability from OSGP vendors. This deprecated the original OSGP-RC4-PSK security protocol which had been identified to be vulnerable.
Meters and More
"Meters and More was created in 2010 from the coordinated work between Enel and Endesa to adopt, maintain and evolve the field-proven Meters and More open communication protocol for smart grid solutions." . In 2010, the Meters and More Association was established to promote the protocol globally, ensuring interoperability and efficiency in power line communication (PLC)-based smart metering systems. Meters and More is an open communication protocol designed for advanced metering infrastructure (AMI). It facilitates reliable, high-speed data exchange over PLC networks, focusing on energy monitoring, demand response, and secure two-way communication between utilities and consumers.
Unlike DLMS/COSEM, which is a globally standardized and versatile protocol supporting multiple utilities (electricity, gas, and water), Meters and More is tailored specifically for PLC-based systems, emphasizing efficiency, reliability, and ease of deployment in electricity metering.
There is a growing trend toward the use of TCP/IP technology as a common communication platform for Smart Meter applications, so that utilities can deploy multiple communication systems, while using IP technology as a common management platform. A universal metering interface would allow for development and mass production of smart meters and smart grid devices prior to the communication standards being set, and then for the relevant communication modules to be easily added or switched when they are. This would lower the risk of investing in the wrong standard as well as permit a single product to be used globally even if regional communication standards vary.
Server Infrastructure for Smart Meter AMI
In Advanced Metering Infrastructure (AMI), the server infrastructure is crucial for managing, storing, and processing the large volumes of data generated by smart meters. This infrastructure ensures seamless communication between smart meters, utility providers, and end-users, supporting real-time monitoring, billing, and grid management.
Key Components of AMI Server Infrastructure
Data Concentrator
A Data Concentrator Unit (DCU) aggregates data from multiple smart meters within a localized area (e.g., a neighborhood or building) before transmitting it to the central server. Data concentrators reduce the communication load on the network and help overcome connectivity challenges by acting as intermediaries between smart meters and the head-end system (HES). They typically support communication protocols such as IEC 62056 (DLMS/COSEM).
Head-End System (HES)
The HES is responsible for collecting, validating, and managing data received from data concentrators and smart meters. It serves as the central communication hub, facilitating two-way communication between the smart meters and the utility's central servers. The HES supports meter configuration, firmware updates, and real-time data retrieval, ensuring data integrity and security.
Meter Data Management System (MDMS)
The MDMS is a specialized software platform that stores and processes large volumes of meter data collected by the HES. Key functions of the MDMS include data validation, estimation, and editing, as well as billing preparation, load analysis, and anomaly detection. The MDMS integrates with other utility systems, such as billing, customer relationship management (CRM), and demand response systems, to enable efficient energy management.
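A core MDMS task mentioned above is validation, estimation and editing (VEE) of interval data. The sketch below shows a deliberately simplified version: it flags missing or implausible interval reads and fills them by simple interpolation; the validation limit and the sample data are invented, and production VEE rules are considerably richer:

# Simplified VEE (validation, estimation, editing) pass over interval data.
# None marks a missing read; the plausibility limit is an invented example.

MAX_PLAUSIBLE_KWH = 50.0   # per-interval upper bound (example value)

def vee(intervals):
    """Validate interval reads and estimate missing/implausible ones."""
    cleaned, flags = list(intervals), []
    for i, value in enumerate(cleaned):
        if value is None or value < 0 or value > MAX_PLAUSIBLE_KWH:
            flags.append(i)
            cleaned[i] = None                        # mark for estimation
    for i in flags:
        prev = next((cleaned[j] for j in range(i - 1, -1, -1) if cleaned[j] is not None), None)
        nxt = next((cleaned[j] for j in range(i + 1, len(cleaned)) if cleaned[j] is not None), None)
        if prev is not None and nxt is not None:
            cleaned[i] = (prev + nxt) / 2.0          # simple interpolation
        else:
            cleaned[i] = prev if prev is not None else nxt
    return cleaned, flags

reads = [1.2, 1.1, None, 1.4, 250.0, 1.3]            # one gap, one spike
print(vee(reads))   # -> estimated values and the indices that were edited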
Data Analytics
Data analytics for smart meters leverages machine learning to extract insights from energy consumption data. Key applications include demand forecasting, dynamic pricing, Energy Disaggregation, and fault detection, enabling optimized grid performance and personalized energy management. These techniques drive efficiency, cost savings, and sustainability in modern energy systems.
"Energy Disaggregation, or the breakdown of your energy use based on specific appliances or devices", is an exploratory technique for analyzing energy consumption in households, commercial buildings, and industrial settings. By using data from a single energy meter, it employs algorithms and machine learning to estimate individual appliance usage without separate monitors. Known as Non-Intrusive Load Monitoring (NILM), this emerging method offers insights into energy efficiency, helping users optimize usage and reduce costs. While promising, energy disaggregation is still being refined for accuracy and scalability as part of smart energy management innovations.
Data management
The other critical technology for smart meter systems is the information technology at the utility that integrates the Smart Meter networks with utility applications, such as billing and CIS. This includes the Meter Data Management system.
It is also essential for smart grid implementations that power line communication (PLC) technologies used within the home over a Home Area Network (HAN) are standardized and compatible. The HAN allows HVAC systems and other household appliances to communicate with the smart meter, and from there to the utility. Currently there are several broadband or narrowband standards in place, or being developed, that are not yet compatible. To address this issue, the National Institute for Standards and Technology (NIST) established the PAP15 group, which studies and recommends coexistence mechanisms with a focus on the harmonization of PLC Standards for the HAN. The objective of the group is to ensure that all PLC technologies selected for the HAN coexist as a minimum. The two leading broadband PLC technologies selected are the HomePlug AV / IEEE 1901 and ITU-T G.hn technologies. Technical working groups within these organizations are working to develop appropriate coexistence mechanisms. The HomePlug Powerline Alliance has developed a new standard for smart grid HAN communications called the HomePlug Green PHY specification. It is interoperable and coexistent with the widely deployed HomePlug AV technology and with the latest IEEE 1901 global Standard and is based on Broadband OFDM technology. ITU-T commissioned in 2010 a new project called G.hnem to address the home networking aspects of energy management, built upon existing Low Frequency Narrowband OFDM technologies.
Opposition and concerns
Some groups have expressed concerns regarding the cost, health, fire risk, security and privacy effects of smart meters and the remotely controllable "kill switch" that is included with most of them. Many of these concerns regard wireless-only smart meters with no home energy monitoring or control or safety features. Metering-only solutions, while popular with utilities because they fit existing business models and have cheap up-front capital costs, often result in such "backlash". Often the entire smart grid and smart building concept is discredited in part by confusion about the difference between home control and home area network technology and AMI. The (now former) attorney general of Connecticut has stated that he does not believe smart meters provide any financial benefit to consumers; the cost of installing the new system, however, is absorbed by those customers.
Security
Smart meters expose the power grid to cyberattacks that could lead to power outages, both by cutting off people's electricity and by overloading the grid. However, many cybersecurity experts state that the smart meters of the UK and Germany have relatively strong cybersecurity and that any such attack there would thus require extraordinarily high effort or financial resources. The EU Cybersecurity Act took effect in June 2019; together with the Directive on Security of Network and Information Systems, it establishes notification and security requirements for operators of essential services.
Through the Smartgrid Cybersecurity Committee, the U.S. Department of Energy published cybersecurity guidelines for grid operators in 2010 and updated them in 2014. The guidelines "...present an analytical framework that organizations can use to develop effective cybersecurity strategies..."
Implementing security protocols that protect these devices from malicious attacks has been problematic, due to their limited computational resources and long operational life.
The current version of IEC 62056 includes the possibility to encrypt, authenticate, or sign the meter data.
One proposed smart meter data verification method involves analyzing the network traffic in real-time to detect anomalies using an Intrusion Detection System (IDS). By identifying exploits as they are being leveraged by attackers, an IDS mitigates the suppliers' risks of energy theft by consumers and denial-of-service attacks by hackers. Energy utilities must choose between a centralized IDS, embedded IDS, or dedicated IDS depending on the individual needs of the utility. Researchers have found that for a typical advanced metering infrastructure, the centralized IDS architecture is superior in terms of cost efficiency and security gains.
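A toy version of the traffic-anomaly idea is sketched below: the number of commands seen per meter per hour is compared against a simple statistical baseline, and meters whose rate deviates strongly are flagged. The baseline method (a z-score), the threshold and the counts are invented placeholders; deployed AMI intrusion detection systems inspect protocol contents and many more features:

# Toy anomaly check for AMI traffic: flag meters whose hourly command count
# deviates strongly from the fleet baseline. Counts and threshold are invented.

import statistics

Z_THRESHOLD = 2.0   # how many standard deviations counts as anomalous

def flag_anomalies(command_counts):
    """command_counts: dict of meter_id -> commands observed this hour."""
    values = list(command_counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0      # avoid division by zero
    return [
        meter_id
        for meter_id, count in command_counts.items()
        if abs(count - mean) / stdev > Z_THRESHOLD
    ]

hourly_counts = {"meter-001": 4, "meter-002": 5, "meter-003": 3,
                 "meter-004": 6, "meter-005": 4, "meter-006": 95}
print(flag_anomalies(hourly_counts))   # -> ['meter-006']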
In the United Kingdom, the Data Communication Company, which transports the commands from the supplier to the smart meter, performs an additional anomaly check on commands issued (and signed) by the energy supplier.
Because smart meters are intelligent measurement devices that periodically record measured values and send the data encrypted to the service provider, in Switzerland these devices need to be evaluated by an evaluation laboratory and certified by METAS from 1 January 2020 according to the Prüfmethodologie (test methodology for the execution of data security evaluation of Swiss smart metering components).
According to a report published by Brian Krebs, in 2009 a Puerto Rico electricity supplier asked the FBI to investigate large-scale thefts of electricity related to its smart meters. The FBI found that former employees of the power company and the company that made the meters were being paid by consumers to reprogram the devices to show incorrect results, as well as teaching people how to do it themselves. Several hacking tools that allow security researchers and penetration testers to verify the security of electric utility smart meters have been released.
Health
Most health concerns about the meters arise from the pulsed radiofrequency (RF) radiation emitted by wireless smart meters.
Members of the California State Assembly asked the California Council on Science and Technology (CCST) to study the issue of potential health impacts from smart meters, in particular whether current FCC standards are protective of public health. The CCST report in April 2011 found no health impacts, based both on lack of scientific evidence of harmful effects from radio frequency (RF) waves and that the RF exposure of people in their homes to smart meters is likely to be minuscule compared to RF exposure to cell phones and microwave ovens. Daniel Hirsch, retired director of the Program on Environmental and Nuclear Policy at UC Santa Cruz, criticized the CCST report on the grounds that it did not consider studies that suggest the potential for non-thermal health effects such as latent cancers from RF exposure. Hirsch also stated that the CCST report failed to correct errors in its comparison to cell phones and microwave ovens and that, when these errors are corrected, smart meters "may produce cumulative whole-body exposures far higher than that of cell phones or microwave ovens."
The Federal Communications Commission (FCC) has adopted recommended Permissible Exposure Limit (PEL) for all RF transmitters (including smart meters) operating at frequencies of 300 kHz to 100 GHz. These limits, based on field strength and power density, are below the levels of RF radiation that are hazardous to human health.
Other studies substantiate the finding of the California Council on Science and Technology (CCST). In 2011, the Electric Power Research Institute performed a study to gauge human exposure to smart meters as compared to the FCC PEL. The report found that most smart meters only transmit RF signals 1% of the time or less. At this rate, and at a distance of 1 foot from the meter, RF exposure would be at a rate of 0.14% of the FCC PEL.
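For a sense of scale, the far-field power density of a transmitter falls off as S = P·G / (4πr²), and the time-averaged exposure scales with the duty cycle. The sketch below runs this arithmetic for an assumed 1 W, 900 MHz transmitter with a 1% duty cycle at a distance of about one foot, compared against the FCC general-population limit of f/1500 mW/cm² for that band; the transmit power and antenna gain are assumptions, not measurements of any particular meter:

import math

# Assumed transmitter parameters (illustrative, not from a specific meter).
POWER_W = 1.0          # transmit power in watts
ANTENNA_GAIN = 1.0     # linear gain (isotropic assumption)
DUTY_CYCLE = 0.01      # transmitting ~1% of the time
DISTANCE_CM = 30.0     # roughly one foot
FREQ_MHZ = 900.0       # typical AMI mesh band

# Far-field power density S = P*G / (4*pi*r^2), converted to mW/cm^2.
power_mw = POWER_W * 1000.0
area_cm2 = 4.0 * math.pi * DISTANCE_CM ** 2
peak_density = power_mw * ANTENNA_GAIN / area_cm2
avg_density = peak_density * DUTY_CYCLE

# FCC general-population limit for 300-1500 MHz: f/1500 mW/cm^2.
fcc_limit = FREQ_MHZ / 1500.0
print(f"peak: {peak_density:.4f} mW/cm^2, duty-cycle average: {avg_density:.5f} mW/cm^2")
print(f"average as % of FCC limit: {100.0 * avg_density / fcc_limit:.2f}%")

With these assumed inputs the duty-cycle-averaged figure comes out near 0.15% of the limit, in the same range as the EPRI estimate quoted above, although the real transmit power, antenna gain and duty cycle of a given meter will differ.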
An indirect potential for harm to health by smart meters is that they enable energy companies to disconnect consumers remotely, typically in response to difficulties with payment. This can cause health problems to vulnerable people in financial difficulty; in addition to denial of heat, lighting, and use of appliances, there are people who depend on power to use medical equipment essential for life. While there may be legal protections in place to protect the vulnerable, many people in the UK were disconnected in violation of the rules.
Safety
Issues surrounding smart meters causing fires have been reported, particularly involving the manufacturer Sensus. In 2012, PECO Energy Company replaced the Sensus meters it had deployed in the Philadelphia, US region after reports that a number of the units had overheated and caused fires. In July 2014, SaskPower, the province-run utility company of the Canadian province of Saskatchewan, halted its roll-out of Sensus meters after similar, isolated incidents were discovered. Shortly afterward, Portland General Electric announced that it would replace 70,000 smart meters that had been deployed in the state of Oregon after similar reports. The company noted that it had been aware of the issues since at least 2013, and that they were limited to specific models it had installed between 2010 and 2012. On July 30, 2014, after a total of eight recent fire incidents involving the meters, SaskPower was ordered by the Government of Saskatchewan to immediately end its smart meter program and remove the 105,000 smart meters it had installed.
Privacy concerns
One technical reason for privacy concerns is that these meters report detailed information about how much electricity is being used, and when. More frequent reports provide more detailed information, while infrequent reports are of less benefit to the provider because they do not allow demand to be managed as well in response to changing needs for electricity. On the other hand, frequent reports allow the utility company to infer behavioral patterns for the occupants of a house, such as when the members of the household are probably asleep or absent. The fine-grained information collected by smart meters therefore raises growing concerns about privacy invasion through the exposure of personal behavior (private activities, daily routines, etc.). Current trends are to increase the frequency of reports. A solution that benefits both the provider and user privacy would be to adapt the reporting interval dynamically. Another solution involves energy storage installed at the household and used to reshape the energy consumption profile. In British Columbia the electric utility is government-owned and as such must comply with privacy laws that prevent the sale of data collected by smart meters; many parts of the world are serviced by private companies that are able to sell the data they collect. In Australia, debt collectors can make use of the data to know when people are at home. In Austin, Texas, police agencies secretly collected smart meter power usage data from thousands of residences to identify marijuana growing operations by determining which residences used more power than "typical"; the data were later used as evidence in court.
Smart meter power data usage patterns can reveal much more than how much power is being used. Research has demonstrated that smart meters sampling power levels at two-second intervals can reliably identify when different electrical devices are in use.
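As a rough illustration of how such fine-grained data can expose appliance activity, the sketch below flags step changes in a two-second power trace and matches them against known appliance power draws. The appliance signatures, thresholds, and readings are invented for illustration and are not taken from the research cited above.

```python
# Illustrative sketch: matching step changes in a smart-meter power trace
# to known appliance power draws. All numbers are invented for illustration.

APPLIANCE_WATTS = {"kettle": 2000, "fridge compressor": 120, "television": 90}
TOLERANCE = 0.15  # accept a 15% mismatch between step size and signature

def detect_appliance_events(samples, threshold=50):
    """samples: list of (timestamp, watts) taken every two seconds."""
    events = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        step = p1 - p0
        if abs(step) < threshold:
            continue  # ignore small fluctuations
        for name, watts in APPLIANCE_WATTS.items():
            if abs(abs(step) - watts) <= TOLERANCE * watts:
                events.append((t1, name, "on" if step > 0 else "off"))
    return events

trace = [(0, 180), (2, 182), (4, 2190), (6, 2195), (8, 205), (10, 204)]
print(detect_appliance_events(trace))
# [(4, 'kettle', 'on'), (8, 'kettle', 'off')]
```

Even this naive edge-matching approach recovers appliance on/off times from aggregate readings, which is why sampling interval and data retention are central to the privacy debate.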
Ross Anderson wrote about privacy concerns "It is not necessary for my meter to tell the power company, let alone the government, how much I used in every half-hour period last month"; that meters can provide "targeting information for burglars"; that detailed energy usage history can help energy companies to sell users exploitative contracts; and that there may be "a temptation for policymakers to use smart metering data to target any needed power cuts."
Opt-out options
Reviews of smart meter programs, moratoriums, delays, and "opt-out" programs are some responses to the concerns of customers and government officials. In response to residents who did not want a smart meter, a utility in Hawaii changed its smart meter program to "opt out" in June 2012. The utility said that once the smart grid installation project nears completion, KIUC may convert the deferral policy to an opt-out policy or program and may charge a fee to those members to cover the costs of servicing the traditional meters. Any fee would require approval from the Hawaii Public Utilities Commission.
After receiving numerous complaints about health, hacking, and privacy concerns with the wireless digital devices, the Public Utility Commission of the US state of Maine voted to allow customers to opt out of the meter change at a cost of $12 a month. In Connecticut, another US state to consider smart metering, regulators declined a request by the state's largest utility, Connecticut Light & Power, to install 1.2 million of the devices, arguing that the potential savings in electric bills did not justify the cost. CL&P already offers its customers time-based rates. The state's Attorney General George Jepsen was quoted as saying the proposal would cause customers to spend upwards of $500 million on meters and get few benefits in return, a claim that Connecticut Light & Power disputed.
Abuse of dynamic pricing
Smart meters allow dynamic pricing; it has been pointed out that, while this allows prices to be reduced at times of low demand, it can also be used to increase prices at peak times if all consumers have smart meters. Additionally, smart meters allow energy suppliers to switch customers instantly to expensive prepay tariffs in case of difficulties paying. In the UK, during a period of very high energy prices from 2022, companies were remotely switching smart meters from a credit tariff to an expensive prepay tariff, which disconnects supplies unless credit has been purchased. While regulations do not permit this without appropriate precautions to help those in financial difficulties and to protect the vulnerable, the rules were often flouted. (Prepaid tariffs could also be levied without smart meters, but this required a dedicated prepay meter to be installed.) In 2022, 3.2 million people were left without power at some point after running out of prepay credit.
Limited benefits
There are questions about whether electricity is or should be primarily a "when you need it" service where the inconvenience/cost-benefit ratio of time-shifting of loads is poor. In the Chicago area, Commonwealth Edison ran a test installing smart meters in 8,000 randomly selected households, together with variable rates and rebates to encourage cutting back during peak usage. In the Crain's Chicago Business article "Smart grid test underwhelms. In the pilot, few power down to save money.", it was reported that fewer than 9% exhibited any amount of peak usage reduction and that the overall amount of reduction was "statistically insignificant". This was from a report by the Electric Power Research Institute, a utility industry think tank that conducted the study and prepared the report. Susan Satter, senior assistant Illinois attorney general for public utilities, said "It's devastating to their plan... The report shows zero statistically different result compared to business as usual."
By 2016, the 7 million smart meters in Texas had not persuaded many people to check their energy data as the process was too complicated.
A report from a parliamentary group in the UK suggests people who have smart meters installed are expected to save an average of £11 annually on their energy bills, much less than originally hoped. The 2016 cost-benefit analysis was updated in 2019 and estimated a similar average saving.
The Australian Victorian Auditor-General found in 2015 that 'Victoria's electricity consumers will have paid an estimated $2.239 billion for metering services, including the rollout and connection of smart meters. In contrast, while a few benefits have accrued to consumers, benefits realisation is behind schedule and most benefits are yet to be realised'.
Erratic demand
Smart meters can allow real-time pricing, and in theory this could help smooth power consumption as consumers adjust their demand in response to price changes. However, modelling by researchers at the University of Bremen suggests that in certain circumstances, "power demand fluctuations are not dampened but amplified instead."
In the media
In 2013, Take Back Your Power, an independent Canadian documentary directed by Josh del Sol was released describing "dirty electricity" and the aforementioned issues with smart meters. The film explores the various contexts of the health, legal, and economic concerns. It features narration from the mayor of Peterborough, Ontario, Daryl Bennett, as well as American researcher De-Kun Li, journalist Blake Levitt, and Dr. Sam Milham. It won a Leo Award for best feature-length documentary and the Annual Humanitarian Award from Indie Fest the following year.
UK roll-out criticism
In a 2011 submission to the Public Accounts Committee, Ross Anderson wrote that Ofgem was "making all the classic mistakes which have been known for years to lead to public-sector IT project failures" and that the "most critical part of the project—how smart meters will talk to domestic appliances to facilitate demand response—is essentially ignored."
Citizens Advice said in August 2018 that 80% of people with smart meters were happy with them. Still, it had received 3,000 calls in 2017 about problems. These related to first-generation smart meters losing their functionality, aggressive sales practices, and customers still having to send meter readings manually.
Ross Anderson of the Foundation for Information Policy Research has criticised the UK's program on the grounds that it is unlikely to lower energy consumption, is rushed and expensive, and does not promote metering competition. Anderson writes, "the proposed architecture ensures continued dominance of metering by energy industry incumbents whose financial interests are in selling more energy rather than less," and urged ministers "to kill the project and instead promote competition in domestic energy metering, as the Germans do – and as the UK already has in industrial metering. Every consumer should have the right to appoint the meter operator of their choice."
The high number of SMETS1 meters installed has been criticized by Peter Earl, head of energy at the price comparison website comparethemarket.com. He said, "The Government expected there would only be a small number of the first-generation of smart meters before Smets II came in, but the reality is there are now at least five million and perhaps as many as 10 million Smets I meters."
UK smart meters in southern England and the Midlands use the mobile phone network to communicate, so they do not work correctly when phone coverage is weak. A solution has been proposed, but was not operational as of March 2017.
In March 2018 the National Audit Office (NAO), which watches over public spending, opened an investigation into the smart meter program, which had cost £11bn by then, paid for by electricity users through higher bills. The National Audit Office published the findings of its investigation in a report titled "Rolling out smart meters" published in November 2018. The report, amongst other findings, indicated that the number of smart meters installed in the UK would fall materially short of the Department for Business, Energy & Industrial Strategy (BEIS) original ambitions of all UK consumers having a smart meter installed by 2020. In September 2019, smart meter rollout in the UK was delayed for four years.
Ross Anderson and Alex Henney wrote that "Ed Miliband cooked the books" to make a case for smart meters appear economically viable. They say that the first three cost-benefit analyses of residential smart meters found that it would cost more than it would save, but "ministers kept on trying until they got a positive result... To achieve 'profitability' the previous government stretched the assumptions shamelessly".
In 2018, a counter-fraud officer at Ofgem with oversight of the roll-out of the smart meter program, who had raised concerns with his manager about many millions of pounds being misspent, was threatened with imprisonment under section 105 of the Utilities Act 2000, which prohibits disclosure of certain information relevant to the energy sector with the intention of protecting national security. The Employment Appeal Tribunal found that the law was in contravention of the European Convention on Human Rights.
Main Suppliers
The ranking of the top ten smart electricity meter suppliers depends on the ranking method used. Among them are:
Landis+Gyr
Itron
Xylem (formerly Sensus)
Sagemcom
Honeywell / Elster
Kamstrup A/S
Wasion Holdings Limited
Holley Technology Ltd
See also
DASH7
Distributed generation
DLMS
Electranet
Home energy monitor
Home idle load
Home network
Meter-Bus
Meter data management
Net metering
Nonintrusive load monitoring
Open metering system
Open smart grid protocol
Power line communication
Smart grid
Utility submetering
Virtual power plant
Notes
References
External links
TIA Smart Utility Networks - U.S. Standardization Process
Demand Response and Advanced Metering Coalition. Definitions.
Advanced Metering Infrastructure (AMI), Department of Primary Industries, Victoria, Australia
Smart Metering Projects Map - Google Maps
Mad about metered billing? They were in 1886, too—Ars Technica
UK Smart Meters
Energy measurement
Electric power distribution
Electric power
Meter
Internet of things | Smart meter | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 8,189 | [
"Physical quantities",
"Measuring instruments",
"Power (physics)",
"Electric power",
"Fluid dynamics",
"Electrical engineering",
"Flow meters"
] |
27,806,268 | https://en.wikipedia.org/wiki/Water%20window | The water window is a region of the electromagnetic spectrum in which water is transparent to soft x-rays. The window extends from the K-absorption edge of carbon at 282 eV (68 PHz, 4.40 nm wavelength) to the K-edge of oxygen at 533 eV (129 PHz, 2.33 nm wavelength). Water is transparent to these X-rays, but carbon and its organic compounds are absorbing. These wavelengths could be used in an x-ray microscope for viewing living specimens. This is technically challenging because few if any viable lens materials are available above extreme ultraviolet.
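The band edges quoted above can be checked with the standard photon energy–wavelength relation E = hc/λ; below is a minimal sketch using the conversion constant hc ≈ 1239.84 eV·nm.

```python
# Convert photon energy in eV to wavelength in nm using E = hc / wavelength.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

for edge, energy in [("carbon K-edge", 282.0), ("oxygen K-edge", 533.0)]:
    print(f"{edge}: {energy} eV -> {wavelength_nm(energy):.2f} nm")
# carbon K-edge: 282.0 eV -> 4.40 nm
# oxygen K-edge: 533.0 eV -> 2.33 nm
```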
See also
electromagnetic absorption by water
x-ray absorption spectroscopy
References
X-rays | Water window | [
"Physics"
] | 137 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
27,808,402 | https://en.wikipedia.org/wiki/Organotrifluoroborate | Organotrifluoroborates are organoboron compounds that contain an anion with the general formula [RBF3]−. They can be thought of as protected boronic acids, or as adducts of carbanions and boron trifluoride. Organotrifluoroborates are tolerant of air and moisture and are easy to handle and purify. They are often used in organic synthesis as alternatives to boronic acids (RB(OH)2), boronate esters (RB(OR′)2), and organoboranes (R3B), particularly for Suzuki-Miyaura coupling.
Structure
Synthesis
Boronic acids RB(OH)2 react with potassium bifluoride K[HF2] to form trifluoroborate salts K[RBF3].
Reactivity
Organotrifluoroborates are strong nucleophiles and react with electrophiles without transition-metal catalysts.
Mechanism
The mechanism of organotrifluoroborate-based Suzuki-Miyaura coupling reactions has recently been investigated in detail. The organotrifluoroborate hydrolyses to the corresponding boronic acid in situ, so a boronic acid can be used in place of an organotrifluoroborate, as long as it is added slowly and carefully.
References
Organoboron compounds
Anions
Borates | Organotrifluoroborate | [
"Physics",
"Chemistry"
] | 303 | [
"Ions",
"Matter",
"Anions"
] |
27,809,884 | https://en.wikipedia.org/wiki/Kirchhoff%E2%80%93Love%20plate%20theory | The Kirchhoff–Love theory of plates is a two-dimensional mathematical model that is used to determine the stresses and deformations in thin plates subjected to forces and moments. This theory is an extension of Euler-Bernoulli beam theory and was developed in 1888 by Love using assumptions proposed by Kirchhoff. The theory assumes that a mid-surface plane can be used to represent a three-dimensional plate in two-dimensional form.
The following kinematic assumptions are made in this theory:
straight lines normal to the mid-surface remain straight after deformation
straight lines normal to the mid-surface remain normal to the mid-surface after deformation
the thickness of the plate does not change during a deformation.
Assumed displacement field
Let the position vector of a point in the undeformed plate be . Then
The vectors form a Cartesian basis with origin on the mid-surface of the plate, and are the Cartesian coordinates on the mid-surface of the undeformed plate, and is the coordinate for the thickness direction.
Let the displacement of a point in the plate be . Then
This displacement can be decomposed into a vector sum of the mid-surface displacement and an out-of-plane displacement in the direction. We can write the in-plane displacement of the mid-surface as
Note that the index takes the values 1 and 2 but not 3.
Then the Kirchhoff hypothesis implies that
If are the angles of rotation of the normal to the mid-surface, then in the Kirchhoff-Love theory
Note that we can think of the expression for as the first order Taylor series expansion of the displacement around the mid-surface.
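The Kirchhoff hypothesis above is commonly summarized by the following displacement field, stated here for reference in standard notation; the symbols u_alpha^0 and w^0 for the mid-surface displacements are the usual textbook choices and may differ from those used elsewhere in this article.

```latex
% Standard Kirchhoff-Love displacement field: u_alpha^0 and w^0 are the
% in-plane and transverse mid-surface displacements, x_3 is the thickness
% coordinate, and alpha = 1, 2.
\begin{aligned}
u_\alpha(x_1,x_2,x_3) &= u^0_\alpha(x_1,x_2) - x_3\,\frac{\partial w^0}{\partial x_\alpha}, \qquad \alpha = 1,2 \\
u_3(x_1,x_2,x_3) &= w^0(x_1,x_2)
\end{aligned}
```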
Quasistatic Kirchhoff-Love plates
The original theory developed by Love was valid for infinitesimal strains and rotations. The theory was extended by von Kármán to situations where moderate rotations could be expected.
Strain-displacement relations
For the situation where the strains in the plate are infinitesimal and the rotations of the mid-surface normals are less than 10° the strain-displacement relations are
where as .
Using the kinematic assumptions we have
Therefore, the only non-zero strains are in the in-plane directions.
Equilibrium equations
The equilibrium equations for the plate can be derived from the principle of virtual work. For a thin plate under a quasistatic transverse load pointing towards positive direction, these equations are
where the thickness of the plate is . In index notation,
where are the stresses.
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for small rotations
|-
|For the situation where the strains and rotations of the plate are small the virtual internal energy is given by
where the thickness of the plate is and the stress resultants and stress moment resultants are defined as
Integration by parts leads to
The symmetry of the stress tensor implies that . Hence,
Another integration by parts gives
For the case where there are no prescribed external forces, the principle of virtual work implies that . The equilibrium equations for the plate are then given by
If the plate is loaded by an external distributed load that is normal to the mid-surface and directed in the positive direction, the external virtual work due to the load is
The principle of virtual work then leads to the equilibrium equations
Boundary conditions
The boundary conditions that are needed to solve the equilibrium equations of plate theory can be obtained from the boundary terms in the principle of virtual work. In the absence of external forces on the boundary, the boundary conditions are
Note that the quantity is an effective shear force.
Constitutive relations
The stress-strain relations for a linear elastic Kirchhoff plate are given by
Since and do not appear in the equilibrium equations it is implicitly assumed that these quantities do not have any effect on the momentum balance and are neglected. The remaining stress-strain relations, in matrix form, can be written as
Then,
and
The extensional stiffnesses are the quantities
The bending stiffnesses (also called flexural rigidity) are the quantities
The Kirchhoff-Love constitutive assumptions lead to zero shear forces. As a result, the equilibrium equations for the plate have to be used to determine the shear forces in thin Kirchhoff-Love plates. For isotropic plates, these equations lead to
Alternatively, these shear forces can be expressed as
where
Small strains and moderate rotations
If the rotations of the normals to the mid-surface are in the range of 10° to 15°, the strain-displacement relations can be approximated as
Then the kinematic assumptions of Kirchhoff-Love theory lead to the classical plate theory with von Kármán strains
This theory is nonlinear because of the quadratic terms in the strain-displacement relations.
If the strain-displacement relations take the von Karman form, the equilibrium equations can be expressed as
Isotropic quasistatic Kirchhoff-Love plates
For an isotropic and homogeneous plate, the stress-strain relations are
where is Poisson's ratio and is Young's modulus. The moments corresponding to these stresses are
In expanded form,
where for plates of thickness . Using the stress-strain relations for the plates, we can show that the stresses and moments are related by
At the top of the plate where , the stresses are
Pure bending
For an isotropic and homogeneous plate under pure bending, the governing equations reduce to
Here we have assumed that the in-plane displacements do not vary with and . In index notation,
and in direct notation
which is known as the biharmonic equation.
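Written out in a commonly used notation, with w the transverse deflection, this pure-bending governing equation takes the familiar form:

```latex
% Biharmonic equation for the transverse deflection w under pure bending
\nabla^2\nabla^2 w \;=\; \frac{\partial^4 w}{\partial x^4}
  + 2\,\frac{\partial^4 w}{\partial x^2\,\partial y^2}
  + \frac{\partial^4 w}{\partial y^4} \;=\; 0
```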
The bending moments are given by
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for pure bending
|-
|For an isotropic, homogeneous plate under pure bending the governing equations are
and the stress-strain relations are
Then,
and
Differentiation gives
and
Plugging into the governing equations leads to
Since the order of differentiation is irrelevant we have , , and . Hence
In direct tensor notation, the governing equation of the plate is
where we have assumed that the displacements are constant.
Bending under transverse load
If a distributed transverse load pointing along positive direction is applied to the plate, the governing equation is . Following the procedure shown in the previous section we get
In rectangular Cartesian coordinates, the governing equation is
and in cylindrical coordinates it takes the form
Solutions of this equation for various geometries and boundary conditions can be found in the article on bending of plates.
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations for transverse loading
|-
|For a transversely loaded plate without axial deformations, the governing equation has the form
where is a distributed transverse load (per unit area). Substitution of the expressions for the derivatives of into the governing equation gives
Noting that the bending stiffness is the quantity
we can write the governing equation in the form
In cylindrical coordinates ,
For symmetrically loaded circular plates, , and we have
Cylindrical bending
Under certain loading conditions a flat plate can be bent into the shape of the surface of a cylinder. This type of bending is called cylindrical bending and represents the special situation where . In that case
and
and the governing equations become
Dynamics of Kirchhoff-Love plates
The dynamic theory of thin plates describes the propagation of waves in the plates, and the study of standing waves and vibration modes.
Governing equations
The governing equations for the dynamics of a Kirchhoff-Love plate are
where, for a plate with density ,
and
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equations governing the dynamics of Kirchhoff-Love plates
|-
|
The total kinetic energy (more precisely, action of kinetic energy) of the plate is given by
Therefore, the variation in kinetic energy is
We use the following notation in the rest of this section.
Then
For a Kirchhoff-Love plate
Hence,
Define, for constant through the thickness of the plate,
Then
Integrating by parts,
The variations and are zero at and .
Hence, after switching the sequence of integration, we have
Integration by parts over the mid-surface gives
Again, since the variations are zero at the beginning and the end of the time interval under consideration, we have
For the dynamic case, the variation in the internal energy is given by
Integration by parts and invoking zero variation at the boundary of the mid-surface gives
If there is an external distributed force acting normal to the surface of the plate, the virtual external work done is
From the principle of virtual work, or more precisely, Hamilton's principle for a deformable body, we have . Hence the governing balance equations for the plate are
Solutions of these equations for some special cases can be found in the article on vibrations of plates.
Isotropic plates
The governing equations simplify considerably for isotropic and homogeneous plates for which the in-plane deformations can be neglected. In that case we are left with one equation of the following form (in rectangular Cartesian coordinates):
where is the bending stiffness of the plate. For a uniform plate of thickness ,
In direct notation
For free vibrations, the governing equation becomes
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of dynamic governing equations for isotropic Kirchhoff-Love plates
|-
|
For an isotropic and homogeneous plate, the stress-strain relations are
where are the in-plane strains. The strain-displacement relations
for Kirchhoff-Love plates are
Therefore, the resultant moments corresponding to these stresses are
The governing equation for an isotropic and homogeneous plate of uniform thickness in the
absence of in-plane displacements is
Differentiation of the expressions for the moment resultants gives us
Plugging into the governing equations leads to
Since the order of differentiation is irrelevant we have . Hence
If the flexural stiffness of the plate is defined as
we have
For small deformations, we often neglect the spatial derivatives of the transverse acceleration of the
plate and we are left with
Then, in direct tensor notation, the governing equation of the plate is
References
See also
Bending
Bending of plates
Infinitesimal strain theory
Linear elasticity
Plate theory
Stress (mechanics)
Stress resultants
Vibration of plates
Continuum mechanics
Gustav Kirchhoff | Kirchhoff–Love plate theory | [
"Physics"
] | 2,152 | [
"Classical mechanics",
"Continuum mechanics"
] |
27,810,332 | https://en.wikipedia.org/wiki/Bending%20of%20plates | Bending of plates, or plate bending, refers to the deflection of a plate perpendicular to the plane of the plate under the action of external forces and moments. The amount of deflection can be determined by solving the differential equations of an appropriate plate theory. The stresses in the plate can be calculated from these deflections. Once the stresses are known, failure theories can be used to determine whether a plate will fail under a given load.
Bending of Kirchhoff-Love plates
Definitions
For a thin rectangular plate of thickness , Young's modulus , and Poisson's ratio , we can define parameters in terms of the plate deflection, .
The flexural rigidity is given by
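A standard expression for the flexural rigidity, written in terms of the Young's modulus E, Poisson's ratio ν, and thickness h defined above, is:

```latex
% Flexural rigidity of a thin plate of thickness h,
% Young's modulus E, and Poisson's ratio nu
D = \frac{E\,h^3}{12\,(1 - \nu^2)}
```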
Moments
The bending moments per unit length are given by
The twisting moment per unit length is given by
Forces
The shear forces per unit length are given by
Stresses
The bending stresses are given by
The shear stress is given by
Strains
The bending strains for small-deflection theory are given by
The shear strain for small-deflection theory is given by
For large-deflection plate theory, we consider the inclusion of membrane strains
Deflections
The deflections are given by
Derivation
In the Kirchhoff–Love plate theory for plates the governing equations are
and
In expanded form,
and
where is an applied transverse load per unit area, the thickness of the plate is , the stresses are , and
The quantity has units of force per unit length. The quantity has units of moment per unit length.
For isotropic, homogeneous, plates with Young's modulus and Poisson's ratio these equations reduce to
where is the deflection of the mid-surface of the plate.
Small deflection of thin rectangular plates
This is governed by the Germain-Lagrange plate equation
This equation was first derived by Lagrange in December 1811 in correcting the work of Germain who provided the basis of the theory.
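In a commonly used notation, with w the deflection, q the distributed transverse load per unit area, and D the flexural rigidity, the equation can be written as:

```latex
% Germain-Lagrange equation for small deflections of a thin plate
\frac{\partial^4 w}{\partial x^4}
  + 2\,\frac{\partial^4 w}{\partial x^2\,\partial y^2}
  + \frac{\partial^4 w}{\partial y^4}
  = \frac{q}{D}
```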
Large deflection of thin rectangular plates
This is governed by the Föppl–von Kármán plate equations
where is the stress function.
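One common way of writing these coupled equations, with φ the Airy stress function, h the plate thickness, and q the transverse load, is given below; this is a standard form, and sign conventions vary between references.

```latex
% Foppl-von Karman equations for large deflections of a thin plate
% (phi is the Airy stress function; sign conventions vary by reference)
\begin{aligned}
D\,\nabla^4 w - h\left(
    \frac{\partial^2 \varphi}{\partial y^2}\frac{\partial^2 w}{\partial x^2}
  - 2\,\frac{\partial^2 \varphi}{\partial x\,\partial y}\frac{\partial^2 w}{\partial x\,\partial y}
  + \frac{\partial^2 \varphi}{\partial x^2}\frac{\partial^2 w}{\partial y^2}
\right) &= q \\
\nabla^4 \varphi + E\left(
    \frac{\partial^2 w}{\partial x^2}\frac{\partial^2 w}{\partial y^2}
  - \left(\frac{\partial^2 w}{\partial x\,\partial y}\right)^2
\right) &= 0
\end{aligned}
```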
Circular Kirchhoff-Love plates
The bending of circular plates can be examined by solving the governing equation with
appropriate boundary conditions. These solutions were first found by Poisson in 1829.
Cylindrical coordinates are convenient for such problems. Here is the distance of a point from the midplane of the plate.
The governing equation in coordinate-free form is
In cylindrical coordinates ,
For symmetrically loaded circular plates, , and we have
Therefore, the governing equation is
If and are constant, direct integration of the governing equation gives us
where are constants. The slope of the deflection surface is
For a circular plate, the requirement that the deflection and the slope of the deflection are finite
at implies that . However, need not equal 0, as the limit
of exists as you approach from the right.
Clamped edges
For a circular plate with clamped edges, we have and at the edge of
the plate (radius ). Using these boundary conditions we get
The in-plane displacements in the plate are
The in-plane strains in the plate are
The in-plane stresses in the plate are
For a plate of thickness , the bending stiffness is and we
have
The moment resultants (bending moments) are
The maximum radial stress is at and :
where . The bending moments at the boundary and the center of the plate are
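For the frequently quoted special case of a uniform transverse load q on a clamped circular plate of radius a, the deflection takes the standard textbook form, stated here in common notation:

```latex
% Clamped circular plate of radius a under uniform transverse load q
w(r) = \frac{q\,(a^2 - r^2)^2}{64\,D},
\qquad w_{\max} = w(0) = \frac{q\,a^4}{64\,D}
```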
Rectangular Kirchhoff-Love plates
For rectangular plates, Navier in 1820 introduced a simple method for finding the displacement and stress when a plate is simply supported. The idea was to express the applied load in terms of Fourier components, find the solution for a sinusoidal load (a single Fourier component), and then superimpose the Fourier components to get the solution for an arbitrary load.
Sinusoidal load
Let us assume that the load is of the form
Here is the amplitude, is the width of the plate in the -direction, and
is the width of the plate in the -direction.
Since the plate is simply supported, the displacement along the edges of
the plate is zero, the bending moment is zero at and , and
is zero at and .
If we apply these boundary conditions and solve the plate equation, we get the
solution
where D is the flexural rigidity, analogous to the flexural stiffness EI of a beam. We can calculate the stresses and strains in the plate once we know the displacement.
For a more general load of the form
where and are integers, we get the solution
Navier solution
Double trigonometric series equation
We define a general load of the following form
where is a Fourier coefficient given by
.
The classical rectangular plate equation for small deflections thus becomes:
Simply-supported plate with general load
We assume a solution of the following form
The partial differentials of this function are given by
Substituting these expressions in the plate equation, we have
Equating the two expressions, we have
which can be rearranged to give
The deflection of a simply-supported plate (of corner-origin) with general load is given by
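In common notation, with a and b the plate dimensions and the Fourier coefficients of the load as defined above, the resulting Navier solution is the double series:

```latex
% Navier double-series solution for a simply supported rectangular plate
w(x,y) = \frac{1}{\pi^4 D}
  \sum_{m=1}^{\infty}\sum_{n=1}^{\infty}
  \frac{a_{mn}}{\left(\dfrac{m^2}{a^2} + \dfrac{n^2}{b^2}\right)^{2}}
  \sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}
```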
Simply-supported plate with uniformly-distributed load
For a uniformly-distributed load, we have
The corresponding Fourier coefficient is thus given by
.
Evaluating the double integral, we have
,
or alternatively in a piecewise format, we have
The deflection of a simply-supported plate (of corner-origin) with uniformly-distributed load is given by
The bending moments per unit length in the plate are given by
Lévy solution
Another approach was proposed by Lévy in 1899. In this case we start with an assumed form of the displacement and try to fit the parameters so that the governing equation and the boundary conditions are satisfied. The goal is to find such that it satisfies the boundary conditions at and and, of course, the governing equation .
Let us assume that
For a plate that is simply-supported along and , the boundary conditions are and . Note that there is no variation in displacement along these edges meaning that and , thus reducing the moment boundary condition to an equivalent expression .
Moments along edges
Consider the case of pure moment loading. In that case and
has to satisfy . Since we are working in rectangular
Cartesian coordinates, the governing equation can be expanded as
Plugging the expression for in the governing equation gives us
or
This is an ordinary differential equation which has the general solution
where are constants that can be determined from the boundary
conditions. Therefore, the displacement solution has the form
Let us choose the coordinate system such that the boundaries of the plate are
at and (same as before) and at (and not and
). Then the moment boundary conditions at the boundaries are
where are known functions. The solution can be found by
applying these boundary conditions. We can show that for the symmetrical case
where
and
we have
where
Similarly, for the antisymmetrical case where
we have
We can superpose the symmetric and antisymmetric solutions to get more general
solutions.
Simply-supported plate with uniformly-distributed load
For a uniformly-distributed load, we have
The deflection of a simply-supported plate with centre with uniformly-distributed load is given by
The bending moments per unit length in the plate are given by
Uniform and symmetric moment load
For the special case where the loading is symmetric and the moment is uniform, we have at ,
The resulting displacement is
where
The bending moments and shear forces corresponding to the displacement are
The stresses are
Cylindrical plate bending
Cylindrical bending occurs when a rectangular plate that has dimensions , where and the thickness is small, is subjected to a uniform distributed load perpendicular to the plane of the plate. Such a plate takes the shape of the surface of a cylinder.
Simply supported plate with axially fixed ends
For a simply supported plate under cylindrical bending with edges that are free to rotate but are axially fixed, cylindrical bending solutions can be found using the Navier and Levy techniques.
Bending of thick Mindlin plates
For thick plates, we have to consider the effect of through-the-thickness shears on the orientation of the normal to the mid-surface after deformation. Raymond D. Mindlin's theory provides one approach for finding the deformation and stresses in such plates. Solutions to Mindlin's theory can be derived from the equivalent Kirchhoff-Love solutions using canonical relations.
Governing equations
The canonical governing equation for isotropic thick plates can be expressed as
where is the applied transverse load, is the shear modulus,
is the bending rigidity, is the plate thickness, ,
is the shear correction factor, is the Young's modulus, is the Poisson's
ratio, and
In Mindlin's theory, is the transverse displacement of the mid-surface of the plate
and the quantities and are the rotations of the mid-surface normal
about the and -axes, respectively. The canonical parameters for this theory
are and . The shear correction factor usually has the
value .
The solutions to the governing equations can be found if one knows the corresponding
Kirchhoff-Love solutions by using the relations
where is the displacement predicted for a Kirchhoff-Love plate, is a
biharmonic function such that , is a function that satisfies the
Laplace equation, , and
Simply supported rectangular plates
For simply supported plates, the Marcus moment sum vanishes, i.e.,
which is almost Laplace's equation for w. In that case the functions , , vanish, and the Mindlin solution is
related to the corresponding Kirchhoff solution by
Bending of Reissner-Stein cantilever plates
Reissner-Stein theory for cantilever plates leads to the following coupled ordinary differential equations for a cantilever plate with concentrated end load at .
and the boundary conditions at are
Solution of this system of two ODEs gives
where . The bending moments and shear forces corresponding to the displacement
are
The stresses are
If the applied load at the edge is constant, we recover the solutions for a beam under a
concentrated end load. If the applied load is a linear function of , then
See also
Bending
Infinitesimal strain theory
Kirchhoff–Love plate theory
Linear elasticity
Mindlin–Reissner plate theory
Plate theory
Stress (mechanics)
Stress resultants
Structural acoustics
Vibration of plates
References
Continuum mechanics | Bending of plates | [
"Physics"
] | 2,047 | [
"Classical mechanics",
"Continuum mechanics"
] |
27,815,296 | https://en.wikipedia.org/wiki/Policy%20and%20charging%20rules%20function | Policy and Charging Rules Function (PCRF) is the software node designated in real-time to determine policy rules in a multimedia network. As a policy tool, the PCRF plays a central role in next-generation networks. Unlike earlier policy engines that were added onto an existing network to enforce policy, the PCRF is a software component that operates at the network core and accesses subscriber databases and other specialized functions, such as a charging system, in a centralized manner. Because it operates in real time, the PCRF has an increased strategic significance and broader potential role than traditional policy engines. This has led to a proliferation of PCRF products since 2008.
The PCRF is the part of the network architecture that aggregates information to and from the network, operational support systems, and other sources (such as portals) in real time, supporting the creation of rules and then automatically making policy decisions for each subscriber active on the network. Such a network might offer multiple services, quality of service (QoS) levels, and charging rules. The PCRF can provide a network-agnostic solution (wireline and wireless) and can enable a multi-dimensional approach that helps create a lucrative and innovative platform for operators. It can be integrated with different platforms, such as billing, rating, charging, and the subscriber database, or it can be deployed as a standalone entity.
The PCRF plays a key role in VoLTE as a mediator of network resources for the IP Multimedia Subsystem (IMS) network, establishing calls and allocating the requested bandwidth to the call bearer with configured attributes. This enables an operator to offer differentiated voice services to its users by charging a premium. Operators also have an opportunity to use the PCRF to prioritize calls to emergency numbers in next-generation networks.
References
External links
Mobile Broadband & the Rise of Policy: Technology Review & Forecast
"It's all a matter of policy," Telecom TV
3GPP TS 23.203 - Policy and charging control architecture
Internet Protocol
Network architecture
Telecommunications engineering
Telecommunications infrastructure | Policy and charging rules function | [
"Engineering"
] | 413 | [
"Network architecture",
"Electrical engineering",
"Telecommunications engineering",
"Computer networks engineering"
] |
27,817,581 | https://en.wikipedia.org/wiki/Division%20of%20Signal%20Transduction%20Therapy | The Division of Signal Transduction Therapy or DSTT is an organization managed by the University of Dundee, the Medical Research Council, and the pharmaceutical companies AstraZeneca, Boehringer Ingelheim, GlaxoSmithKline, Merck Serono, Janssen Pharmaceutica, and Pfizer. The purpose of the collaboration is to conduct cell signalling research and to encourage development of new drug treatments for global diseases such as cancer, rheumatoid arthritis, and Parkinson's disease. Specifically the collaboration aims to target protein kinases and the ubiquitylation system in the development of these therapies. It is one of the largest ever collaborations between the commercial pharmaceutical industry and any academic research institute.
Organizational resources and management
The organization was founded by Professor Sir Philip Cohen and Professor Pete Downes in 1998. In 2003 the organization's existence was renewed with £15 million funding, and in 2008 further renewed with £11 million. In July 2012 the collaboration was renewed once more with core support funding of £14.4 million under the directorship of Professor Dario Alessi.
It is made up from fifteen research teams based at the University of Dundee and along with support personnel totals nearly 200 members of staff. Thirteen of the teams are based within the MRC Protein Phosphorylation and Ubiquitylation Unit at the College of Life Sciences. The amount of funding and staff make DSTT the largest collaboration between the for-profit pharmaceutical industry and a university in the United Kingdom.
Under the DSTT's agreement, the commercial companies and the DSTT share access to their unpublished results, equipment, and staff expertise in the participating laboratories. The university staff gets steady funding, while the commercial companies get rights to license certain intellectual property produced. The DSTT does not conduct contract research on behalf of member companies; 60% of the budget is consumed by basic research chosen by the companies and the remaining 40% is used to provide analytical services and maintain the collection of reagents. The DSTT itself produces protein and lipid kinases, phosphatases and ubiquitin reagents for member companies to use in research and as targets for high-throughput screening. These reagents are prerequisites to the development of new drug leads, and the variety kept available by the DSTT is vast compared to what typical laboratories keep.
Research
The focus of the DSTT is the study of protein phosphorylation and ubiquitylation.
Protein phosphorylation is a principal control mechanism in almost all aspects of cellular regulation in most organisms. Abnormalities in phosphorylation contribute to many classes of diseases, including cancer, diabetes, and rheumatoid arthritis.
Awards
The University of Dundee received a Queen's Anniversary Prize in recognition of the DSTT being a model for research sharing between academic and commercial sectors. Elizabeth II and Prince Philip presented the prize on 16 February 2006.
References
External links
Division of Signal Transduction Therapy
Biomedical research foundations
University of Dundee
Research institutes in the United Kingdom | Division of Signal Transduction Therapy | [
"Engineering",
"Biology"
] | 625 | [
"Biotechnology organizations",
"Biomedical research foundations"
] |
24,522,629 | https://en.wikipedia.org/wiki/Shunt%20regulated%20push-pull%20amplifier | A shunt regulated push-pull amplifier is a Class A amplifier whose output drivers (transistors or, more commonly, vacuum tubes) operate in antiphase. The key design element is that the output stage also serves as the phase splitter.
The acronym SRPP is also used to describe a series regulated push-pull amplifier.
History
The earliest vacuum-tube-based circuit reference is a patent by Henry Clough of the Marconi company, filed in 1940. It proposes its use as a modulator, but also mentions use as an audio amplifier.
Other patents mention this circuit later in slightly modified forms, but it was not widely used until 1951, when Peterson and Sinclair finally adapted and patented the SRPP for audio use. A variety of transistor-based versions appeared after the 1960s.
References
External links
page at tubecad.com
article at The Valve Wizard
US Patent 2802907 by Peterson and Sinclair (1957)
Electronic amplifiers | Shunt regulated push-pull amplifier | [
"Technology"
] | 186 | [
"Electronic amplifiers",
"Amplifiers"
] |
24,525,188 | https://en.wikipedia.org/wiki/Locally%20decodable%20code | A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining (or querying) a small number of bits of a possibly corrupted codeword.
This property could be useful, say, in a context where information is being transmitted over a noisy channel, and only a small subset of the data is required at a particular time and there is no need to decode the entire message at once. Locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two.
Codewords are generated from the original message using an algorithm that introduces a certain amount of redundancy into the codeword; thus, the codeword is always longer than the original message. This redundancy is distributed across the codeword and allows the original message to be recovered with good probability even in the presence of errors. The more redundant the codeword, the more resilient it is against errors, and the fewer queries required to recover a bit of the original message.
Overview
More formally, a -locally decodable code encodes an -bit message to an -bit codeword such that any bit of the message can be recovered with probability by using a randomized decoding algorithm that queries only bits of the codeword , even if up to locations of the codeword have been corrupted.
Furthermore, a perfectly smooth local decoder is a decoder such that, in addition to always generating the correct output given access to an uncorrupted codeword, for every and the query to recover the bit is uniform over .
(The notation denotes the set ). Informally, this means that the set of queries required to decode any given bit are uniformly distributed over the codeword.
Local list decoders are another interesting subset of local decoders. List decoding is useful when a codeword is corrupted in more than places, where is the minimum Hamming distance between two codewords. In this case, it is no longer possible to identify exactly which original message has been encoded, since there could be multiple codewords within distance of the corrupted codeword. However, given a radius , it is possible to identify the set of messages that encode to codewords that are within of the corrupted codeword. An upper bound on the size of the set of messages can be determined by and .
Locally decodable codes can also be concatenated, where a message is encoded first using one scheme, and the resulting codeword is encoded again using a different scheme. (Note that, in this context, concatenation is the term used by scholars to refer to what is usually called composition.) This might be useful if, for example, the first code has some desirable properties with respect to rate, but it has some undesirable property, such as producing a codeword over a non-binary alphabet. The second code can then transform the result of the first encoding over a non-binary alphabet to a binary alphabet. The final encoding is still locally decodable, and requires additional steps to decode both layers of encoding.
Length of codeword and query complexity
The rate of a code refers to the ratio between its message length and codeword length: , and the number of queries required to recover 1 bit of the message is called the query complexity of a code.
The rate of a code is inversely related to the query complexity, but the exact shape of this tradeoff is a major open problem. It is known that there are no LDCs that query the codeword in only one position, and that the optimal codeword size for query complexity 2 is exponential in the size of the original message. However, there are no known tight lower bounds for codes with query complexity greater than 2. Approaching the tradeoff from the side of codeword length, the only known codes with codeword length proportional to message length have query complexity for . There are also codes in between that have codewords polynomial in the size of the original message and polylogarithmic query complexity.
Applications
Locally decodable codes have applications to data transmission and storage, complexity theory, data structures, derandomization, theory of fault tolerant computation, and private information retrieval schemes.
Data transmission and storage
Locally decodable codes are especially useful for data transmission over noisy channels. The Hadamard code (a special case of Reed Muller codes) was used in 1971 by Mariner 9 to transmit pictures of Mars back to Earth. It was chosen over a 5-repeat code (where each bit is repeated 5 times) because, for roughly the same number of bits transmitted per pixel, it had a higher capacity for error correction. (The Hadamard code falls under the general umbrella of forward error correction, and just happens to be locally decodable; the actual algorithm used to decode the transmission from Mars was a generic error-correction scheme.)
LDCs are also useful for data storage, where the medium may become partially corrupted over time, or the reading device is subject to errors. In both cases, an LDC will allow for the recovery of information despite errors, provided that there are relatively few. In addition, LDCs do not require that the entire original message be decoded; a user can decode a specific portion of the original message without needing to decode the entire thing.
Complexity theory
One of the applications of locally decodable codes in complexity theory is hardness amplification. Using LDCs with polynomial codeword length and polylogarithmic query complexity, one can take a function that is hard to solve on worst case inputs and design a function that is hard to compute on average case inputs.
Consider limited to only length inputs. Then we can see as a binary string of length , where each bit is for each . We can use a polynomial length locally decodable code with polylogarithmic query complexity that tolerates some constant fraction of errors to encode the string that represents to create a new string of length . We think of this new string as defining a new problem on length inputs. If is easy to solve on average, that is, we can solve correctly on a large fraction of inputs, then by the properties of the LDC used to encode it, we can use to probabilistically compute on all inputs. Thus, a solution to for most inputs would allow us to solve on all inputs, contradicting our assumption that is hard on worst case inputs.
Private information retrieval schemes
A private information retrieval scheme allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved. One common way of ensuring privacy is to have separate, non-communicating servers, each with a copy of the database. Given an appropriate scheme, the user can make queries to each server that individually do not reveal which bit the user is looking for, but which together provide enough information that the user can determine the particular bit of interest in the database.
One can easily see that locally decodable codes have applications in this setting. A general procedure to produce a -server private information scheme from a perfectly smooth -query locally decodable code is as follows:
Let be a perfectly smooth LDC that encodes -bit messages to -bit codewords. As a preprocessing step, each of the servers encodes the -bit database with the code , so each server now stores the -bit codeword . A user interested in obtaining the bit of randomly generates a set of queries such that can be computed from using the local decoding algorithm for . The user sends each query to a different server, and each server responds with the bit requested. The user then uses to compute from the responses.
Because the decoding algorithm is perfectly smooth, each query is uniformly distributed over the codeword; thus, no individual server can gain any information about the user's intentions, so the protocol is private as long as the servers do not communicate.
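As an illustration of this idea, the minimal sketch below implements the classic two-server scheme based on XOR over random subsets (a simpler special case than the general k-server construction described above; all variable names are invented). Each query set alone is a uniformly random subset of indices, so neither server learns anything about the index, yet the XOR of the two answers equals the desired bit.

```python
import random

# Minimal 2-server private information retrieval sketch.
# Each server holds the same n-bit database and answers with the parity (XOR)
# of the bits indexed by the query set it receives.

def server_answer(database, query_set):
    ans = 0
    for j in query_set:
        ans ^= database[j]
    return ans

def retrieve_bit(database_copy1, database_copy2, i, n):
    s = {j for j in range(n) if random.random() < 0.5}  # uniformly random subset
    s_prime = s ^ {i}                  # symmetric difference: flip membership of i
    a1 = server_answer(database_copy1, s)        # query sent to server 1
    a2 = server_answer(database_copy2, s_prime)  # query sent to server 2
    return a1 ^ a2                     # equals database[i]

db = [1, 0, 1, 1, 0, 0, 1, 0]
print(retrieve_bit(db, db, 5, len(db)))  # prints db[5] = 0
```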
Examples
The Hadamard code
The Hadamard (or Walsh-Hadamard) code is an example of a simple locally decodable code that maps a string of length to a codeword of length . The codeword for a string is constructed as follows: for every , the bit of the codeword is equal to , where (mod 2). It is easy to see that every codeword has a Hamming distance of from every other codeword.
The local decoding algorithm has query complexity 2, and the entire original message can be decoded with good probability if the codeword is corrupted in less than of its bits. For , if the codeword is corrupted in a fraction of places, a local decoding algorithm can recover the bit of the original message with probability .
Proof: Given a codeword and an index , the algorithm to recover the bit of the original message works as follows:
Let refer to the vector in that has 1 in the position and 0s elsewhere. For , denotes the single bit in that corresponds to . The algorithm chooses a random vector and the vector (where denotes bitwise XOR). The algorithm outputs (mod 2).
Correctness: By linearity,
But , so we just need to show that and with good probability.
Since and are uniformly distributed (even though they are dependent), the union bound implies that and with probability at least . Note: to amplify the probability of success, one can repeat the procedure with different random vectors and take the majority answer.
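The encoding and the two-query decoder described above can be sketched directly; the following is a minimal illustration over bit-vectors, with the corruption level, message, and number of trials chosen arbitrarily.

```python
import random

def dot(x, a):
    """Inner product of two bit-vectors, mod 2."""
    return sum(xi & ai for xi, ai in zip(x, a)) % 2

def hadamard_encode(x):
    """One codeword bit for every vector a in {0,1}^n, indexed by the integer value of a."""
    n = len(x)
    return [dot(x, [(a >> k) & 1 for k in range(n)]) for a in range(2 ** n)]

def local_decode_bit(codeword, i, n):
    """Two-query local decoder for bit i: query y and y XOR e_i, return the XOR."""
    y = random.randrange(2 ** n)
    return codeword[y] ^ codeword[y ^ (1 << i)]

x = [1, 0, 1, 1]
cw = hadamard_encode(x)
# Corrupt one position of the codeword.
for pos in random.sample(range(len(cw)), 1):
    cw[pos] ^= 1
# Repeating the decoder and taking a majority vote amplifies the success probability.
votes = [local_decode_bit(cw, 2, len(x)) for _ in range(25)]
print(max(set(votes), key=votes.count))  # x[2] = 1, with high probability
```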
The Reed–Muller code
The main idea behind local decoding of Reed-Muller codes is polynomial interpolation. The key concept behind a Reed-Muller code is a multivariate polynomial of degree on variables. The message is treated as the evaluation of a polynomial at a set of predefined points. To encode these values, a polynomial is extrapolated from them, and the codeword is the evaluation of that polynomial on all possible points. At a high level, to decode a point of this polynomial, the decoding algorithm chooses a set of points on a line that passes through the point of interest . It then queries the codeword for the evaluation of the polynomial on points in and interpolates that polynomial. Then it is simple to evaluate the polynomial at the point that will yield . This roundabout way of evaluating is useful because (a) the algorithm can be repeated using different lines through the same point to improve the probability of correctness, and (b) the queries are uniformly distributed over the codeword.
More formally, let be a finite field, and let be numbers with . The Reed-Muller code with parameters is the function RM : that maps every -variable polynomial over of total degree to the values of on all the inputs in . That is, the input is a polynomial of the form
specified by the interpolation of the values of the predefined points and the output is the sequence for every .
To recover the value of a degree polynomial at a point , the local decoder shoots a random affine line through . Then it picks points on that line, which it uses to interpolate the polynomial and then evaluate it at the point where the result is . To do so, the algorithm picks a vector uniformly at random and considers the line through . The algorithm picks an arbitrary subset of , where , and queries coordinates of the codeword that correspond to points for all and obtains values . Then it uses polynomial interpolation to recover the unique univariate polynomial with degree less than or equal to such that for all . Then, to get the value of , it just evaluates . To recover a single value of the original message, one chooses to be one of the points that defines the polynomial.
Each individual query is distributed uniformly at random over the codeword. Thus, if the codeword is corrupted in at most a fraction of locations, by the union bound, the probability that the algorithm samples only uncorrupted coordinates (and thus correctly recovers the bit) is at least .
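The line-based decoder described above can be sketched as follows. This is a minimal illustration over a small prime field; the polynomial, field size, and parameters are invented for illustration, and no corrupted positions are modelled.

```python
import random

P = 101  # prime field size; must exceed the degree bound

def poly_eval(coeffs, point):
    """coeffs maps exponent tuples to coefficients, e.g. {(2, 0): 3} means 3*x1^2."""
    total = 0
    for exps, c in coeffs.items():
        term = c
        for var, e in zip(point, exps):
            term = term * pow(var, e, P) % P
        total = (total + term) % P
    return total

def interpolate_at_zero(ts, vals):
    """Lagrange-interpolate the univariate polynomial through (t, val) pairs, evaluated at t = 0."""
    result = 0
    for i, (ti, vi) in enumerate(zip(ts, vals)):
        num, den = 1, 1
        for j, tj in enumerate(ts):
            if i != j:
                num = num * (-tj) % P
                den = den * (ti - tj) % P
        result = (result + vi * num * pow(den, P - 2, P)) % P
    return result

def local_decode(query_codeword, x, degree):
    """Recover f(x) by querying the codeword along a random line through x."""
    m = len(x)
    y = [random.randrange(1, P) for _ in range(m)]   # random direction of the line
    ts = random.sample(range(1, P), degree + 1)       # distinct nonzero line parameters
    vals = [query_codeword(tuple((xi + t * yi) % P for xi, yi in zip(x, y))) for t in ts]
    return interpolate_at_zero(ts, vals)              # the line polynomial at t = 0 is f(x)

f = {(2, 0): 3, (1, 1): 5, (0, 0): 7}  # f(x1, x2) = 3*x1^2 + 5*x1*x2 + 7, total degree 2
codeword = {(a, b): poly_eval(f, (a, b)) for a in range(P) for b in range(P)}
print(local_decode(lambda pt: codeword[pt], (4, 9), degree=2), poly_eval(f, (4, 9)))
# both printed values agree: the decoder recovers f(4, 9)
```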
For other decoding algorithms, see the references.
See also
Private information retrieval
Linear cryptanalysis
References
Error detection and correction | Locally decodable code | [
"Engineering"
] | 2,463 | [
"Error detection and correction",
"Reliability engineering"
] |
24,526,038 | https://en.wikipedia.org/wiki/C16H14O2 | The molecular formula C16H14O2 (molar mass: 238.28 g/mol, exact mass: 238.09938 u) may refer to:
Benzyl cinnamate
Methyl hydroxychalcone (MCHP)
Molecular formulas | C16H14O2 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,526,762 | https://en.wikipedia.org/wiki/Reticulated%20foam | Reticulated foam is a very porous, low-density solid foam. 'Reticulated' means like a net. Reticulated foams are extremely open foams i.e. there are few, if any, intact bubbles or cell windows. In contrast, the foam formed by soap bubbles is composed solely of intact (fully enclosed) bubbles. In a reticulated foam only the lineal boundaries where the bubbles meet (Plateau borders) remain.
The solid component of a reticulated foam may be an organic polymer like polyurethane, a ceramic, or a metal. These materials are used in a wide range of applications where the high porosity and large surface area are needed, including filters, catalyst supports, fuel tank inserts, and loudspeaker covers.
Structure and properties
A description of the structure of reticulated foams is still being developed. While Plateau's laws, the rules governing the shape of soap films in foams, were developed in the 19th century, a mathematical description of the structure is still debated. The computer-generated Weaire–Phelan structure is the most recent. In a reticulated foam only the edges of the polyhedra remain; the faces are missing. In commercial reticulated foam, up to 98% of the faces are removed. The dodecahedron is sometimes given as the basic unit for these foams, but the most representative shape is a polyhedron with 13 faces. Cell size and cell size distribution are critical parameters for most applications. Porosity is typically 95%, but can be as high as 98%. Reticulation affects many of the physical properties of a foam. Typically, resistance to compression is decreased, while tensile properties such as elongation and resistance to tearing are increased.
Production
Robert A. Volz is credited with discovering the first process for making reticulated polyurethane foam in 1956 while working for the Scott Paper Company. Production of reticulated polyurethane foam is a two-step process that begins with the creation of conventional (closed-cell) polyurethane foam, after which cell faces (or "windows") are removed. This step exploits the fact that the higher surface area and lower mass of cell faces compared with cell struts (or edges) make them much more susceptible to both combustion and chemical degradation. Thus, closed-cell foam is either filled with a combustible gas like hydrogen and ignited under controlled conditions, or it is exposed to a sodium hydroxide solution that chemically degrades the foam, removing the cell windows whilst sparing the edges.
Reticulated ceramic foams are made by coating a reticulated polyurethane foam with an aqueous suspension of a ceramic powder then heating the material to first evaporate the water then fuse the ceramic particles and finally to burn off the organic polymer.
Reticulated metal foam can also be made using polyurethane foam as a template similar to its use in ceramic foams. Metals can be vapor deposited onto the polyurethane foam and then the organic polymer burned off.
Applications
Reticulated foams are used where porosity, surface area, and low density are important.
Puppets (such as the bodies/faces/hands of The Muppets)
Humidifier pads
Air conditioner filters
Scrubbers
Ceramic filters for filtering molten metal
Vehicle and bacteria filters
Speaker grills
Face mask and pads
Outdoor cushions
Marine seating
Shoe polish and cosmetic applicators
Ink jet cartridges
Aquaculture (water purification)
Anti-slosh filling in fuel tanks for aircraft (such as the A-10 Thunderbolt II) and race cars
References
External links
Reticulation Process, FXI
Ceramic materials
Foams
Materials science
Plastics
Polyurethanes | Reticulated foam | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 764 | [
"Applied and interdisciplinary physics",
"Foams",
"Unsolved problems in physics",
"Materials science",
"Ceramic materials",
"nan",
"Ceramic engineering",
"Amorphous solids",
"Plastics"
] |
24,528,574 | https://en.wikipedia.org/wiki/Scintillating%20bolometer | A scintillating bolometer (or luminescent bolometer) is a scientific instrument using particle physics in the search for events with low energy deposition. These events could include dark matter, low energy solar neutrinos, double beta decay or rare radioactive decay. It works by simultaneously measuring both the light pulse and heat pulse generated by a particle interaction within its internal scintillator crystal. The device was originally proposed by L. Gonzalez-Mestres and D. Perret-Gallix (LAPP, IN2P3/CNRS)
In their rapporteur contribution to the Proceedings of the XXIV International Conference on High-Energy Physics, Munich, August 1988, Gonzalez-Mestres and Perret-Gallix wrote:
Perhaps bolometry should in some cases be combined with other detection techniques (luminescence?) in order to produce a primary fast signal as timing strobe. If light is used as a complementary signature, particle identification can be achieved through the heat-light ratio, where nucleus recoil is expected to be less luminescent than ionizing particles. The success of such a development would open the way to unprecedented achievements in background rejection for rare event experiments.
Further explanations, including a description of the detector and possible applications incorporating in particular BGO and tungstates, were given by these authors in other papers such as their contribution to the March 1989 Moriond Meeting (pages 16–18).
The luminescent bolometer has since then been developed by scientists from several groups, including the CNRS Institut d'Astrophysique Spatiale and University of Zaragoza collaboration, in view of the proposed ROSEBUD particle detector experiment in the Canfranc Underground Laboratory. ROSEBUD uses a bismuth germanate (Bi4Ge3O12, "BGO") detector crystal.
The CRESST collaboration is currently using the same kind of device with CaWO4 crystals in an experiment to detect dark matter at Laboratori Nazionali del Gran Sasso.
References
External links
(August 1988), published in Nuclear Instruments and Methods in Physics Research (July 1999).
Scientific instruments | Scintillating bolometer | [
"Technology",
"Engineering"
] | 437 | [
"Scientific instruments",
"Measuring instruments"
] |
24,530,443 | https://en.wikipedia.org/wiki/Electron-stimulated%20luminescence | Electron-stimulated luminescence (ESL) is production of light by cathodoluminescence, i.e. by a beam of electrons made to hit a fluorescent phosphor surface. This is also the method used to produce light in a cathode ray tube (CRT). Experimental light bulbs that were made using this technology do not include magnetic or electrostatic means to deflect the electron beam.
A cathodoluminescent light has a transparent glass envelope coated on the inside with a light-emitting phosphor layer. Electrons emitted from a cathode strike the phosphor; the current returns through a transparent conductive coating on the envelope. The phosphor layer emits light through the transparent face of the envelope. The system has a power supply providing at least 5 kV DC to the light-emitting device, and the electrons transiting from cathode to anode are essentially unfocused. Additional circuits allow TRIAC-type dimmers to control the light level. Sample lights produced so far have a color rendering index of 90. The energy consumption can be 70% less than that of a standard incandescent light bulb. Claimed lifetime can be as long as 10,000 hours, which is more than ten times that of a standard incandescent light bulb.
Unlike fluorescent lamps, which produce light through the electrical excitation of mercury vapor, ESL lamps do not use mercury. The first commercially available ESL product was a reflector bulb.
Drawbacks include high weight, a slightly larger-than-normal base and – as with all cathode ray tubes – when switched on, a slight delay before illumination begins and a static charge which attracts dust to the bulb face. As of 2016 the cost is higher and claimed efficiency is less than half that of commercially available LED bulbs, although it is considerably better than that of traditional incandescent lamps.
History
In 1958, Ferranti introduced a line of flood beam CRT-type stroboscope lamps.
Following delays, one company, called Vu1 Corporation, released ESL lamp samples in 2011. The company has not continued in operation.
See also
CRT projector
Nimo tube
Electroluminescence
Electroluminescent display (ELD)
Fluorescence
Fluorescent lamp
List of light sources
References
External links
Patent application with description (filed 2008-02-05)
Cathodoluminescent lamps: A clean and energy-efficient complement to LEDs
Prototype of cathodoluminescent lamp for general lighting using carbon fiber field emission cathode
Cathodoluminescent UV Sources for Biomedical Applications
Types of lamp
Glass applications
Energy-saving lighting
Vacuum tubes | Electron-stimulated luminescence | [
"Physics"
] | 554 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
24,530,565 | https://en.wikipedia.org/wiki/Glasmuseet%20Ebeltoft | Glasmuseet Ebeltoft is a museum in Ebeltoft, Denmark. It is dedicated to the exhibition and collection of contemporary glass art worldwide and also offers public demonstrations and seminars to glass students in its glass-blowing studio.
Establishment
The museum was founded in 1985 by Danish glass artists Finn Lynggaard and Tchai Munch. It is administered by the private Foundation for the Collection of Contemporary International Studio Glass. The museum makes its home in Ebeltoft's former Customs and Excise House; in 2006 a modern wing was added to the original building. In addition to exhibition spaces, the museum has a library, gift shop and cafe that are open to the public. Also in 2006, an enclosed garden and glass-blowing studio were added to the complex. The glass studio presents glass-working demonstrations to the public and seminars for students of glass.
Lynggaard (1930-2011), originally a ceramicist, had lectured at Sheridan College in Oakville, Ontario, where he encountered the studio glass movement. Studio glass—starting in the early 1960s—embraced glass as a medium for independent sculpture artists, in contrast to utilitarian or conventionally decorative glass work (typically produced by craftsmen to the specifications of a designer, as with companies such as Tiffany & Co.).
“After raising the profile of glass in Gothenburg, London and elsewhere in Europe, in 1980 [Lynggaard] moved to Ebeltoft (‘Apple Hill’) – considered a ‘wilderness’ by some of his students – to set up a glassblowing studio.”
Exhibitions
The museum presents four to six exhibitions per year that focus on contemporary glass art. It shows experimental work by young artists in group and solo shows, as well as new work by well-established artists. Organized by museum staff, the exhibitions often include catalogs that are printed in several languages for the benefit of an international audience. Glasmuseet Ebeltoft often loans the exhibitions it creates to museums in other countries. In the past these countries have included China, England, Finland, Germany and the United States.
Collections
Glasmuseet Ebeltoft’s collection contains 1500 objects by 600 artists. Its holdings are composed of donations and loans of art works, obtained in most cases directly from the artists whose work is desired by the museum. The museum’s collection is most unusual in that artists who have been invited to be represented in it are permitted to exchange or supplement their works in the collection with new pieces.
Artists represented by work in the collection include Klaus Moje, Gerry King and Nick Mount of Australia; Paul Sanders of Australia; Louis Leloup of Belgium; Václav Ciglar, František Janák, Jiří Harcuba, Pavel Hlava, Stanislav Libenský and Jaroslava Brychtová, and René Roubíček of the Czech Republic; Eva Engström, Finn Lynggaard and Tchai Munch of Denmark; Ivo Lill of Estonia; Erwin Eisch, Ursula Merker, Gerhard Ribka, Kurt Wallstab and Ann Wolff of Germany; Ursula Huber-Peer, Pino Signoretto, Bruno Pedrosa and Lino Tagliapietra of Italy; Durk Valkema and Sybren Valkema of the Netherlands; Anna Carlgren, Gunnar Cyrén, Eva Englund, Göran Wärff and Ulrica Hydman-Vallien of Sweden; Charles Bray and David Reekie of the United Kingdom; and Rick Beck, Gary Beecham, William Bernstein, Katharine Bernstein, Nicole Chesney, Dale Chihuly, Fritz Dreisbach, Shane Fero, Robert Fritz, Michael Glancy, Richard Jolley, Jon Kuhn, Marvin Lipofsky, Harvey Littleton, John Littleton/Kate Vogel, Dante Marioni, Richard Marquis, Joel Philip Myers, Mary Shaffer, Paul Stankard, Michael Taylor and Toots Zynsky of the United States.
Sponsors
Because it receives no direct financial support from the Danish government, Glasmuseet Ebeltoft relies on sponsorships from Denmark's business community. The museum's three major sponsors are the textile company Kvadrat, Djursland Bank and the NRGi energy company. Other corporate supporters include Primagaz, Montana Møbler, Blue Water Shipping, Cerama and Mols Linien.
Volunteer help is drawn from the museum's "Friends Society", which has about 900 members.
References
External links
Art museums and galleries in Denmark
Danish art
Glass museums and galleries
Art museums and galleries established in 1985
1985 establishments in Denmark
Buildings and structures in Syddjurs Municipality
Museums in the Central Denmark Region | Glasmuseet Ebeltoft | [
"Materials_science",
"Engineering"
] | 950 | [
"Glass engineering and science",
"Glass museums and galleries"
] |
24,530,569 | https://en.wikipedia.org/wiki/Catch%20bond | A catch bond is a type of noncovalent bond whose dissociation lifetime increases with tensile force applied to the bond. Normally, bond lifetimes are expected to diminish with force. In the case of catch bonds, the lifetime of the bond actually increases up to a maximum before it decreases like in a normal bond. Catch bonds work in a way that is conceptually similar to that of a Chinese finger trap. While catch bonds are strengthened by an increase in force, the force increase is not necessary for the bond to work. Catch bonds were suspected for many years to play a role in the rolling of leukocytes, being strong enough to roll in presence of high forces caused by high shear stresses, while avoiding getting stuck in capillaries where the fluid flow, and therefore shear stress, is low. The existence of catch bonds was debated for many years until strong evidence of their existence was found in bacteria. Definite proof of their existence came shortly thereafter in leukocytes.
Discovery
Catch bonds were first proposed in 1988 in the Proceedings of the Royal Society by M. Dembo et al. while at Los Alamos National Laboratory. While developing a molecular model to study the critical tension required to detach a membrane bound to a surface through adhesion molecules, it was found that it is theoretically possible for bond dissociation to be increased by force, decreased by force, or independent of force. The terms "slip bond", "catch bond", and "ideal bond" were coined by Dembo to describe these three types of bond behaviors.
Slip bonds represent the ordinary behavior originally modeled by G. Bell, Dembo's former postdoctoral mentor at Los Alamos National Laboratory, in 1978. Slip bonds were supported by flow chamber experiments where forces are applied on molecular bonds linking cells to the chamber floor under shear flow. By comparison, no decisive evidence of catch bonds was found until 2003. This is due to experimental conditions that were unfavorable for detecting catch bonds, as well as the counterintuitive nature of the bonds themselves. For example, most early experiments were conducted in 96-well plates, an environment that does not provide any flow. Some experiments failed to produce shear stress that is now known to be critical to lengthen the lifetimes of catch bonds, while other experiments were conducted under flow conditions too weak or too strong for optimal shear-induced strengthening of these bonds. Finally, Marshall and coworkers found that P-selectin:PSGL-1 bonds exhibited increasing bond lifetime as step loads were applied between 0 and ~10 pN for monomeric interaction but 1 and ~20 pN for dimeric interaction, exhibiting catch bond behavior; after reaching maximum values, which were ~0.6 and 1.2 seconds for monomeric and dimeric interaction, respectively, the bond lifetime fell rapidly at higher loads, displaying slip bond behavior ("catch-slip" bonds). These data were collected using an atomic force microscope and a flow chamber, and have subsequently been duplicated using a biomembrane force probe.
These findings prompted the discovery of other important catch bonds in the 2000s, including those between L-selectin and PSGL-1 or endoglycan, FimH and mannose, myosin and actin, platelet glycoprotein Ib and von Willebrand factor, and integrin alpha 5 beta 1 and fibronectin. Emphasizing their importance and general acceptance, at least 24 articles on catch bonds were published in the three years following their discovery.
More catch bonds were discovered in the 2010s, including E-selectin with carbohydrate ligands, G-actin with G-actin or F-actin, cadherin-catenin complex with actin, vinculin with F-actin, microtubule with kinetochore particle, integrin alpha L beta 2 and intercellular adhesion molecule 1 (ICAM-1), integrin alpha 4 beta 1 with vascular adhesion molecule 1, integrin alpha M beta 2 with ICAM-1, integrin alpha V beta 3 with fibronectin, and integrin alpha IIb beta 3 with fibronectin or fibrinogen.
Sivasankar and his research team have found that the mechanism behind the puzzling phenomenon is due to long-lived, force-induced hydrogen bonds. Using data from previous experiments, the team used molecular dynamics to discover that two rod-shaped cadherins in an X-dimer formed catch bonds when pulled and in the presence of calcium ions. The calcium ions keep the cadherins rigid, while pulling brings the proteins closer together, allowing for hydrogen bonds to form. The mechanism behind catch bonds helps to explain the biophysics behind cell-cell adhesion. According to the researchers, "Robust cadherin adhesion is essential for maintaining the integrity of tissue such as the skin, blood vessels, cartilage and muscle that are exposed to continuous mechanical assault."
The above catch bonds are formed between adhesion receptors and ligands, and among structural molecules and motor proteins, which bear force or generate force in their physiological function. An interesting recent development is the discovery of catch bonds formed between signaling receptors and their ligands. These include bonds between T cell antigen receptors (TCR) or pre-TCR and peptide presented by major histocompatibility complex (pMHC) molecules, Fc gamma receptor and IgG Fc, and notch receptor and ligands. The presence of catch bonds in the interactions of these signaling (rather than adhesion) receptors has been suggested to be indicative of a possible role of these receptors as mechanoreceptors.
Variations and related dynamic bonds
Triphasic bonds
Other types of "dynamic bonds" have been defined in addition to the original catch bonds, slip bonds and ideal bonds classified by Dembo. Unlike slip bonds, which have been observed over the entire force range tested, catch bonds only exist within a certain force range, as any molecular bond is eventually overpowered by a high enough force. Therefore, catch bonds are always followed by slip bonds, hence the term "catch-slip bonds". More variations have also been observed, e.g., triphasic slip-catch-slip bonds.
Flex bonds
The transition between catch and slip bonds has been modeled as molecular dissociation from two bond states along two pathways. Dissociation along each pathway alone results in a slip bond, but at different rates. At low forces, dissociation occurs predominantly along the fast pathway. Increasing force tilts the multi-dimensional energy landscape to switch the dissociation from the fast pathway to the slow pathway, manifesting catch bond behavior. As dissociation along the slow pathway dominates, further increase in force accelerates dissociation, manifesting slip bond behavior. This switching behavior is also called a flex bond.
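One widely used minimal description fits catch-slip lifetime data with a two-pathway Bell-type dissociation rate, in which one escape route is suppressed by force and the other is accelerated by it. The sketch below is a simplification of the two-pathway picture described above; the rate constants and barrier distances are arbitrary illustrative numbers, not fitted P-selectin/PSGL-1 parameters.

```python
import math

kBT = 4.114  # thermal energy at ~298 K, in pN*nm

def lifetime(F, k_fast=50.0, x_fast=2.0, k_slow=0.3, x_slow=0.4):
    """Mean bond lifetime (s) for a two-pathway model: a fast pathway suppressed by force
    plus a slow pathway accelerated by force. All parameter values are illustrative."""
    k_off = k_fast * math.exp(-F * x_fast / kBT) + k_slow * math.exp(F * x_slow / kBT)
    return 1.0 / k_off

for F in (0, 5, 10, 15, 20, 30):   # applied force in pN
    print(f"F = {F:2d} pN   lifetime = {lifetime(F):.3f} s")
# The lifetime first rises with force (catch regime) and then falls (slip regime).
```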
Dynamic catch
The above bonds involve bimolecular interactions, which arguably represents the simplest types. A new type of catch bonds emerges when trimolecular interactions are involved. In such cases, one molecule can interact with the two counter-molecules using two binding sites, either separately, i.e. one at a time in the absence of the other to form bimolecular bonds, or concurrently to form a trimolecular bond when both counter-molecules are present. An interesting finding is that even when the two bimolecular interactions behave as slip bonds, the trimolecular interaction can behave as catch bond. This new type of catch bond, which requires concurrent and cooperative binding, is termed dynamic catch.
Cyclic mechanical reinforcement
Most catch bonds were demonstrated using force-clamp force spectroscopy where upon initial ramping, a constant force is loaded on the bond to observe how long the bond lasts, i.e., measuring the bond lifetime at a constant force. Catch bonds are revealed when the mean bond lifetime (reciprocally related to the rate of bond dissociation) increases with the clamped force. Zhu and colleagues demonstrated that bond lifetime measured at the force-clamp phase could be substantially prolonged if the initial ramping included two forms of pre-conditioning: 1) loading the bond by ramping the force to a high level (peak force) before clamping the force at a low level for lifetime measurement, and 2) loading and unloading the bond repeatedly by multiple force cycles before clamping the force at a peak value for lifetime measurement. This new bond type, termed cyclic mechanical reinforcement (CMR), is distinct from catch bond, but it nevertheless resembles catch bond in that the bond lifetime increases with the peak force and with the number of cycles used to pre-condition the bond. CMR has been observed for interactions between integrin alpha 5 beta 1 and fibronectin and between G-actin and G-actin or F-actin.
Force history dependence
The CMR phenomenon indicates that how long a bond can sustain force at a given level can depend on the history of force application prior to arriving at that force level. In other words, the "rate constant" of molecular dissociation at a constant force depends not only on the value of force at the current time but also on the prior force history the bond has experienced in the past. This has indeed been observed for interactions of P-selectin with PSGL-1 or anti-P-selectin antibody, L-selectin with PSGL-1, myosin with actin, integrin alpha V beta 3 with fibrinogen, and TCR with pMHC.
Various catch bonds of specific molecular interactions
Selectin bond
Background
Leukocytes, as well as other types of white blood cells, normally form weak and short-lived bonds with other cells via selectin. The outside of the leukocyte membrane is covered with microvilli, which carry various types of adhesive molecules, including P-selectin glycoprotein ligand-1 (PSGL-1), a glycoprotein that is normally decorated with sulfated sialyl-Lewis x. The sulfated-sialyl-Lewis-x-bearing PSGL-1 molecule has the ability to bind to any type of selectin. Leukocytes also exhibit L-selectin, which binds to other cells or other leukocytes that carry PSGL-1 molecules.
An important example of catch bonds is their role in leukocyte extravasation. During this process, leukocytes move through the circulatory system to sites of infection, and in doing so they 'roll' and bind to selectin molecules on the vessel wall. While able to float freely in the blood under normal circumstances, shear stress induced by inflammation causes leukocytes to attach to the endothelial vessel wall and begin rolling rather than floating downstream. This “shear-threshold phenomenon” was initially characterized in 1996 by Finger et al. who showed that leukocyte binding and rolling through L-selectin is only maintained when a critical shear-threshold is applied to the system. Multiple sources of evidence have shown that catch bonds are responsible for the tether and roll mechanism that allows this critical process to occur. Catch bonds allow increasing force to convert short-lived tethers into stronger, longer-lived binding interactions, thus decreasing the rolling velocity and increasing the regularity of rolling steps. However, this mechanism only works at an optimal force. As shear force increases past this force, bonds revert to slip bonds, creating an increase in velocity and irregularity of rolling.
Leukocyte adhesion mediated by shear stress
In blood vessels, at very low shear stress of ~0.3 dynes per square centimeter, leukocytes do not adhere to the blood vessel endothelial cells. Cells move along the blood vessel at a rate proportional to the blood flow rate. Once the shear stress passes that threshold value, leukocytes start to accumulate via selectin binding. At low shear stress above the threshold, about 0.3 to 5 dynes per square centimeter, leukocytes alternate between binding and non-binding. Because one leukocyte has many selectins on its surface, these binding/unbinding events cause a rolling motion along the blood vessel. As the shear stress continues to increase, the selectin bonds become stronger, causing the rolling velocity to slow. This reduction in leukocyte rolling velocity allows cells to stop and perform firm binding via integrin binding.
Selectin binding does not exhibit a "true" catch bond property. Experiments show that at very high shear stress (past a second threshold), the selectin binding transitions from a catch bond to a slip bond, in which the rolling velocity increases as the shear force increases.
Leukocyte rolling mediated by catch-slip transition
Researchers have hypothesized that the ability of leukocytes to maintain attachment and rolling on the blood vessel wall can be explained by a combination of many factors, including cell flattening to maintain a larger binding surface-area and reduce hydrodynamic drag, as well as tethers holding the rear of the rolling cell to the endothelium breaking and slinging to the front of the rolling cell to reattach to the endothelial wall. These hypotheses work well with Marshall's 2003 findings that selectin bonds go through a catch-slip transition in which initial increases in shear force strengthen the bond, but with enough applied force bond lifetimes begin to decay exponentially. Therefore, the weak binding of a sling at the leading edge of a rolling leukocyte would initially be strengthened as the cell rolls farther and the tension on the bond increases, preventing the cell from dissociating from the endothelial wall and floating freely in the bloodstream despite high shear forces. However, at the trailing edge of the cell, tension becomes high enough to transition the bond from catch to slip, and the bonds tethering the trailing edge eventually break, allowing the cell to roll further instead of remaining stationary.
Proposed mechanisms of action
Allosteric model
Though catch bonds are now widely recognized, their mechanism of action is still under dispute.
Two leading hypotheses dominate the discussion. The first hypothesis, the allosteric model, stems from evidence that x-ray crystallography of selectin proteins shows two conformational states: a bent conformation in the absence of ligand, and an extended conformation in the presence of the ligand. The main domains involved in these states are a lectin domain which contains the ligand binding site and an EGF domain which can shift between bent and extended conformations. The allosteric model claims that tension on the EGF domain favors the extended conformation, and extension of this domain causes a conformational shift in the lectin domain, resulting in greater binding affinity for the ligand. As a result of this conformational change, the ligand is effectively locked in place despite tension exerted on the bond.
Sliding-rebinding model
The sliding-rebinding model differs from the allosteric model in that the allosteric model posits that only one binding site exists and can be altered, but the sliding-rebinding model states that multiple binding sites exist and aren't changed by EGF extension. Rather, in the bent conformation which is favored at low applied forces, the applied force is perpendicular to the line of possible binding sites. Thus, when the association between ligand and lectin domain is interrupted, the bond quickly dissociates. At larger applied forces, however, the protein is extended and the line of possible binding sites is aligned with the applied force, allowing the ligand to quickly re-associate with a new binding site after the initial interaction is disrupted. With multiple binding sites, and even the ability to re-associate with the original binding site, the rate of ligand dissociation would be decreased as is typical of catch bonds.
Mechanism of a single selectin binding
A single PSGL-1 and selectin bond is similar to a conventional protein bond when the force is kept constant, with a dissociation constant. As the force exerted starts to increase, the dissociation constant decreases, causing the binding to become stronger. As the force reaches a threshold level of 11 pN, the dissociation constant starts to increase again, weakening the bond and causing it to exhibit a slip bond property.
FimH bond
Background
Catch bonds also play a significant role in bacterial adhesion, most notably in Escherichia coli. E. coli and other bacteria residing in the intestine must be able to adhere to intestinal walls or risk being eliminated from the body through defecation. This is possible due to the bacterial protein FimH, which mediates high adhesion in response to high flow. It is the lectin domain of FimH that gives the bond its catch bond property when binding to mannose residues on other cells. Experiments have shown that when force is loaded rapidly, the bonds are able to survive high forces, pointing to catch bond behavior. Catch bonds are responsible for the failure of E. coli in the urinary tract to be eliminated during urination, thus leading to urinary tract infections. This knowledge is important not only for understanding bacteria, but also for learning how anti-adhesive technologies can be created.
Bacteria adhesion mediated by shear stress
Similar to selectin binding, FimH binding also has a threshold, and it only starts binding to host cells above this threshold. This shear stress threshold is about 1 dyne per square centimeter, slightly larger than that of selectin binding. Above this threshold, FimH also alternates between binding, pausing and unbinding with the mannose residues. However, unlike selectin binding, FimH binding to mannose-BSA can have either very long or very short pauses. This causes FimH binding to exhibit a "stick-and-roll" adhesion, not the rolling adhesion seen in selectin binding. And unlike selectin binding, which requires integrin to achieve firm adhesion, FimH binding can become stationary on its own, and this process is reversible. All of this is mediated by the shear stress level: at shear stress higher than 20 dynes per square centimeter, FimH binding is stationary. At shear stress higher than 100 dynes per square centimeter, slow rolling is observed.
See also
Noncovalent bonding
Ionic bond
Hydrogen bond
Van der Waals force
Intermolecular force
Slip bond
References
Chemical bonding
Biophysics
Cell adhesion | Catch bond | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 3,786 | [
"Applied and interdisciplinary physics",
"Biophysics",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
31,627,468 | https://en.wikipedia.org/wiki/Rhodocollybia%20maculata | Rhodocollybia maculata, commonly known as the spotted toughshank, is a species of basidiomycete fungus in the family Omphalotaceae. It often appears in decomposing conifer duff. R. maculata is a source of collybolide, a sesquiterpenoid containing a furyl-δ-lactone motif reminiscent of salvinorin A.
Description
The cap is cream-colored with red-brown spots. The edge remains inrolled for an extended period of time. The whitish gills are crowded, becoming spotted in age. The similarly colored stipe is long, tough, hollow, and tapered downwards.
A variety known as scorzonerea is characterized by the yellowish color of its gills, and sometimes of the stipe.
Edibility
Though non-toxic, this species is considered inedible due to its toughness and unpalatability; it is typically bitter.
Kappa-opioid receptor agonism
In 2016, Gupta et al. reported that collybolide exhibited high-potency, selective kappa-opioid receptor (KOR) agonism. Due to its attractive bioactivity and chemical similarity to salvinorin A, collybolide garnered attention in the synthetic chemistry and pharmacology fields as a potential scaffold for developing next-generation analgesics, antipruritics, and antidepressants.
In 2022, Shevick et al. completed the first enantioselective total synthesis of collybolide and profiled the activity of synthetic collybolide at the KOR. Despite previous findings by Gupta et al., these assays showed that neither enantiomer of collybolide had KOR activity. The synthetic sample was identical to natural collybolide isolated from R. maculata. Assays of crude R. maculata extracts by other groups additionally showed no KOR activity. These assays of synthetic and natural samples contradict the findings of Gupta et al., and suggest that collybolide and the other constituents of R. maculata have no activity at KOR.
Gallery
Notes
References
External links
Images
Fungi of North America
maculata
Taxa named by Johannes Baptista von Albertini
Taxa named by Lewis David de Schweinitz
Fungus species | Rhodocollybia maculata | [
"Biology"
] | 473 | [
"Fungi",
"Fungus species"
] |
31,629,357 | https://en.wikipedia.org/wiki/Phage-ligand%20technology | The Phage-ligand technology is a technology to detect, bind and remove bacteria and bacterial toxins by using highly specific bacteriophage derived proteins.
Origins
Host recognition by bacteriophages occurs via bacteria-binding proteins that have strong binding affinities for specific protein or carbohydrate structures on the surface of the bacterial host. At the end of the infection life cycle, the bacteria-lysing endolysin is synthesized and degrades the bacterial peptidoglycan cell wall, resulting in lysis (and therefore killing) of the bacterial cell.
Applications
Bacteriophage derived proteins are used for detection and removal of bacteria and bacterial components (especially endotoxin contaminations) in pharmaceutical and biological products, human diagnostics, food, and decolonization of bacteria causing nosocomial infections (e.g. MRSA).
Protein modifications allow biotechnological adaptation to specific requirements.
See also
Affinity magnetic separation
References
Laboratory techniques
Molecular biology | Phage-ligand technology | [
"Chemistry",
"Biology"
] | 207 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
31,631,104 | https://en.wikipedia.org/wiki/Gas%20networks%20simulation | Gas networks simulation or gas pipeline simulation is a process of defining the mathematical model of gas transmission and gas distribution systems, which are usually composed of highly integrated pipe networks operating over a wide range of pressures. Simulation allows the behaviour of gas network systems to be predicted under different conditions. Such predictions can be effectively used to guide decisions regarding the design and operation of the real system.
Simulation types
Depending on the gas flow characteristics in the system there are two states that can be matter of simulation:
Steady state – the simulation does not take into account the gas flow characteristics' variations over time and is described by a system of algebraic equations, in general nonlinear ones.
Unsteady state (transient flow analysis) – described either by a partial differential equation or a system of such equations. Gas flow characteristics are mainly functions of time.
Network topology
In gas network simulation and analysis, matrices have turned out to be the natural way of expressing the problem. Any network can be described by a set of matrices based on the network topology. Consider, as an example, the gas network described below. The network consists of one source node (reference node), labelled 1, four load nodes (2, 3, 4 and 5) and seven pipes or branches. For network analysis it is necessary to select at least one reference node. Mathematically, the reference node is referred to as the independent node and all nodal and branch quantities are dependent on it. The pressure at the source node is usually known, and this node is often used as the reference node. However, any node in the network may have its pressure defined and can be used as the reference node. A network may contain several sources or other pressure-defined nodes and these form a set of reference nodes for the network.
The load nodes are points in the network where load values are known. These loads may be positive, negative or zero. A negative load represents a demand for gas from the network. This may consist in supplying domestic or commercial consumers, filling gas storage holders, or even accounting for leakage in the network. A positive load represents a supply of gas to the network. This may consist in taking gas from storage, source or from another network. A zero load is placed on nodes that do not have a load but are used to represent a point of change in the network topology, such as the junction of several branches. For steady state conditions, the total load on the network is balanced by the inflow into the network at the source node.
The interconnection of a network can produce a closed path of branches, known as a loop. In this example, loop A consists of branches p12-p24-p14, loop B consists of p13-p34-p14, and loop C consists of p24-p25-p35-p34. A fourth loop may be defined as p12-p24-p34-p13, but it is redundant if loops A, B and C are also defined. Loops A, B and C are independent, but the fourth one is not, as it can be derived from A, B and C by eliminating common branches.
To define the network topology completely it is necessary to assign a direction to each branch. Each branch direction is assigned arbitrarily and is assumed to be the positive direction of flow in the branch. If the flow has a negative value, then the direction of flow is opposite to the branch direction. In a similar way, a direction is assigned to each loop and to the flow in the loop.
Solving problems involving gas network computation for any topology requires a representation of the network that enables the calculations to be performed in the simplest way. These requirements are met by graph theory, which permits representation of the network structure by means of the incidence properties of the network components and, in consequence, makes such a representation explicit.
Flow equations
The calculation of the pressure drop along the individual pipes of a gas network requires use of the flow equations. Many gas flow equations have been developed and a number have been used by the gas industry. Most are based on the results of gas flow experiments. The results of the various formulas differ because these experiments were conducted over different ranges of flow conditions and on pipes of varying internal surface roughness. Consequently, each formula is applicable to a limited range of flow and pipe surface conditions.
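Since the individual formulas differ mainly in their constants and exponents, a generic steady-state relation of the form p1² − p2² = K·Qⁿ is often convenient, where K lumps together pipe length, diameter, friction factor and gas properties. The sketch below simply solves such a relation for the flow rate; K, n, and the pressures are placeholder values rather than any published correlation.

```python
def flow_rate(p1, p2, K, n=2.0):
    """Solve the generic pipe-flow relation p1**2 - p2**2 = K * Q**n for Q.
    K and n are formula-dependent placeholders (n = 2 corresponds to fully
    turbulent, Weymouth-type behaviour)."""
    return ((p1**2 - p2**2) / K) ** (1.0 / n)

# Illustrative numbers only: pressures in bar absolute, K chosen arbitrarily
print(f"Q = {flow_rate(p1=4.0, p2=3.2, K=0.05):.2f} (arbitrary flow units)")
```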
Mathematical methods of simulation
Steady state analysis
A gas network is in the steady state when the values of the gas flow characteristics are independent of time and the system is described by a set of nonlinear equations. The goal of a simple simulation of a gas network is usually that of computing the nodal pressures and loads and the flows in the individual pipes. The pressures at the nodes and the flow rates in the pipes must satisfy the flow equations and, together with the nodal loads, must fulfill Kirchhoff's first and second laws.
There are many methods of analyzing the mathematical models of gas networks, but, like the networks themselves, they can be divided into two types: solvers for low-pressure networks and solvers for high-pressure networks.
The network equations are nonlinear and are generally solved by some form of Newton iteration; rather than use the full set of variables, it is possible to eliminate some of them. Based on the type of elimination, the resulting solution techniques are termed either nodal or loop methods.
Newton-nodal method
The method is based on the set of nodal equations, which are simply a mathematical representation of Kirchhoff's first law, stating that the inlet and outlet flows at each node should be equal. An initial approximation is made to the nodal pressures, which is then successively corrected until the final solution is reached.
Disadvantages
Poor convergence characteristics, the method is extremely sensitive to initial conditions.
Advantages
Does not require extra computation to produce and optimize a set of loops.
Can easily be adapted for optimization tasks.
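A minimal sketch of the Newton-nodal iteration described above, for a three-node example with one fixed-pressure source and two load nodes. The flow relation Q = sign(pi² − pj²)·√(|pi² − pj²|/K), all pipe constants, pressures and loads are illustrative placeholders rather than any particular industry formula, and a finite-difference Jacobian is used for simplicity.

```python
import numpy as np

# Tiny high-pressure example: node 0 is the fixed-pressure source, nodes 1 and 2 carry loads.
pipes = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0)]   # (from, to, resistance K), placeholder values
p_source = 10.0
loads = np.array([2.0, 3.0])                      # demands at nodes 1 and 2 (flow out of the network)

def pipe_flow(pi, pj, K):
    dp2 = pi**2 - pj**2
    return np.sign(dp2) * np.sqrt(abs(dp2) / K)

def residual(p_load):
    """Kirchhoff's first law at each load node: net inflow minus demand."""
    p = np.concatenate(([p_source], p_load))
    r = -loads.copy()
    for i, j, K in pipes:
        Q = pipe_flow(p[i], p[j], K)
        if i > 0: r[i - 1] -= Q                   # flow leaving node i
        if j > 0: r[j - 1] += Q                   # flow entering node j
    return r

p = np.array([9.0, 8.5])                          # initial approximation to the nodal pressures
for _ in range(30):                               # Newton iteration with a finite-difference Jacobian
    r = residual(p)
    if np.max(np.abs(r)) < 1e-9:
        break
    J = np.empty((2, 2))
    for k in range(2):
        dp = np.zeros(2); dp[k] = 1e-7
        J[:, k] = (residual(p + dp) - r) / 1e-7
    p = p - np.linalg.solve(J, r)

print("nodal pressures:", p)
print("residuals:", residual(p))
```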
Newton-loop method
The method is based on the generated loops, and the equations are simply a mathematical representation of Kirchhoff's second law, which states that the sum of the pressure drops around any loop should be zero. Before using the loop method, the fundamental set of loops needs to be found. Basically, the fundamental set of loops can be found by constructing a spanning tree for the network. The standard methods for producing a spanning tree are based on a breadth-first search or a depth-first search, which are not so efficient for large networks because their computing time is proportional to n², where n is the number of pipes in the network. A more efficient method for large networks is the forest method, whose computational time is proportional to n·log₂n.
The loops produced from the spanning tree are not the best set that could be produced. There is often significant overlap between loops, with some pipes shared between several loops. This usually slows convergence, so a loop-reduction algorithm needs to be applied to minimize the overlap. This is usually performed by replacing loops in the original fundamental set with smaller loops produced by linear combination of the original set.
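A small sketch of the spanning-tree construction for the seven-branch example network introduced earlier: a breadth-first spanning tree is built from the source node, and each non-tree branch (chord) then closes one fundamental loop. The particular fundamental set obtained this way depends on the chosen tree, which is why the loop-reduction step described above is applied afterwards. Node labels follow the example; everything else here is illustrative.

```python
from collections import deque

# Branches of the example network (nodes 1-5, seven pipes: p12, p13, p14, p24, p25, p34, p35)
edges = [(1, 2), (1, 3), (1, 4), (2, 4), (2, 5), (3, 4), (3, 5)]

def fundamental_loops(nodes, edges, root=1):
    """Breadth-first spanning tree; every non-tree edge (chord) closes one fundamental loop."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    parent, seen, queue = {root: None}, {root}, deque([root])
    while queue:                                   # BFS to build the spanning tree
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v); parent[v] = u; queue.append(v)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    def path_to_root(n):
        path = [n]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path
    loops = []
    for a, b in edges:                             # chords = edges not in the tree
        if frozenset((a, b)) not in tree:
            pa, pb = path_to_root(a), path_to_root(b)
            common = next(x for x in pa if x in pb)            # lowest common ancestor
            loops.append(pa[:pa.index(common) + 1] + pb[:pb.index(common)][::-1])
    return loops

for loop in fundamental_loops(range(1, 6), edges):
    print(loop)       # three independent loops, as expected for 7 branches and 5 nodes
```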
Disadvantages
It requires extra computation to produce and optimize a set of loops.
The dimension of the equations to be solved is smaller but they are much less sparse.
Advantages
The main advantage is that the equation can be solved very efficiently with an iterative method that avoids the need of matrix factorization and consequently has a minimal requirement for storage; this makes it very attractive for low pressure networks with a large number of pipes.
Fast convergence which is less sensitive to the initial conditions.
Newton loop-node method
The Newton loop-node method is based on Kirchhoff's first and second laws. It is a combination of the Newton nodal and loop methods and does not solve the loop equations explicitly. The loop equations are transformed into an equivalent set of nodal equations, which are then solved to yield the nodal pressures. The nodal pressures are then used to calculate the corrections to the chord flows (which are synonymous with loop flows), and the tree branch flows are obtained from them.
Disadvantages
Since a set of nodal equations is solved, a nodal Jacobian matrix is used, which is sparser than the equivalent loop Jacobian matrix; this may have a negative impact on computational efficiency and usability.
Advantages
Good convergence characteristics of loop method are maintained.
No need to define and optimize the loops.
Unsteady state analysis
Computer simulation
The importance of the mathematical methods' efficiency arises from the large scale of the simulated networks. The computational cost of the simulation method, in terms of computation time and computer storage, is required to be low. At the same time, the accuracy of the computed values must be acceptable for the particular model.
References
Ekhtiari, A. Dassios, I. Liu, M. Syron, E. A Novel Approach to Model a Gas Network, Appl. Sci. 2019, 9(6), 1047.
Networks
Piping | Gas networks simulation | [
"Chemistry",
"Engineering"
] | 1,788 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
31,631,838 | https://en.wikipedia.org/wiki/Hypergravity | Hypergravity is defined as the condition where the force of gravity (real or perceived) exceeds that on the surface of the Earth. This is expressed as being greater than 1 g. Hypergravity conditions are created on Earth for research on human physiology in aerial combat and space flight, as well as testing of materials and equipment for space missions. Manufacturing of titanium aluminide turbine blades in 20 g is being explored by researchers at the European Space Agency (ESA) via an 8-meter wide Large Diameter Centrifuge (LDC).
Bacteria
NASA scientists looking at meteorite impacts discovered that most strains of bacteria were able to reproduce under acceleration exceeding 7,500 g.
Recent research carried out on extremophiles in Japan involved a variety of bacteria, including Escherichia coli and Paracoccus denitrificans, being subjected to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g. Another study, published in the Proceedings of the National Academy of Sciences, reports that some bacteria can exist even in extreme "hypergravity". In other words, they can still live and breed despite gravitational forces that are 400,000 times greater than those felt on Earth.
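For orientation, g-levels of this kind can be related to rotor speed and radius through the standard relative centrifugal force formula, RCF = ω²r/g. The rotor radius and speed below are assumed round numbers chosen only to land in the same order of magnitude, not the parameters of the cited experiment.

```python
import math

def relative_centrifugal_force(rpm, radius_m):
    """RCF in multiples of g: omega**2 * r / g for a rotor spinning at `rpm`."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega**2 * radius_m / 9.81

# Assumed example: a 7 cm rotor radius spinning at 70,000 rpm
print(f"{relative_centrifugal_force(70_000, 0.07):,.0f} g")   # roughly 4 x 10^5 g
```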
Paracoccus denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas. Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications on the feasibility of existence of exobacteria and panspermia.
Materials
High-gravity conditions generated by centrifuges are applied in the chemical industry, in casting, and in material synthesis. Convection and mass transfer are greatly affected by the gravitational conditions. Researchers have reported that the high-gravity level can effectively affect the phase composition and morphology of the products.
Research on Humans
See G-force: Human tolerance
Effects on rate of aging of rats
Ever since Pearl proposed the rate of living theory of aging, numerous studies have demonstrated its validity in poikilotherms. In mammals, however, satisfactory experimental demonstration is still lacking because an externally imposed increase of basal metabolic rate of these animals (e.g. by placement in the cold) is usually accompanied by general homeostatic disturbance and stress. The present study was based on the finding that rats exposed to slightly increased gravity are able to adapt with little chronic stress but at a higher level of basal metabolic expenditure (increased 'rate of living'). The rate of aging of 17-month-old rats that had been exposed to 3.14 g in an animal centrifuge for 8 months was larger than of controls as shown by apparently elevated lipofuscin content in heart and kidney, reduced numbers and increased size of mitochondria of heart tissue, and inferior liver mitochondria respiration (reduced 'efficiency': 20% larger ADP: 0 ratio, P less than 0.01; reduced 'speed': 8% lower respiratory control ratio, P less than 0.05). Steady-state food intake per day per kg body weight, which is presumably proportional to 'rate of living' or specific basal metabolic expenditure, was about 18% higher than in controls (P less than 0.01) after an initial 2-month adaptation period. Finally, though half of the centrifuged animals lived only a little shorter than controls (average about 343 vs. 364 days on the centrifuge, difference statistically nonsignificant), the remaining half (longest survivors) lived on the centrifuge an average of 520 days (range 483–572) compared to an average of 574 days (range 502–615) for controls, computed from onset of centrifugation, or 11% shorter (P less than 0.01). Therefore, these results show that a moderate increase of the level of basal metabolism of young adult rats adapted to hypergravity compared to controls in normal gravity is accompanied by a roughly similar increase in the rate of organ aging and reduction of survival, in agreement with Pearl's rate of living theory of aging, previously experimentally demonstrated only in poikilotherms.
Effects on the behavior of adult rats
Pups from gestating rats exposed to hypergravity (1.8 g) or to normal gravity at the perinatal period were evaluated. By comparison to controls, the hypergravity group had shorter latencies before choosing a maze arm in a T-maze and fewer exploratory pokes in a hole board. During dyadic encounters, the hypergravity group had a lower number of self-grooming episodes and shorter latencies before crossing under the opposing rat.
See also
g-force#Human tolerance
References
External links
The Pull of Hypergravity
Gravity
Acceleration
Human spaceflight
Astrobiology | Hypergravity | [
"Physics",
"Astronomy",
"Mathematics",
"Biology"
] | 1,031 | [
"Origin of life",
"Physical quantities",
"Acceleration",
"Speculative evolution",
"Quantity",
"Astrobiology",
"Biological hypotheses",
"Wikipedia categories named after physical quantities",
"Astronomical sub-disciplines"
] |
31,632,219 | https://en.wikipedia.org/wiki/Cyclohexanedimethanol | Cyclohexanedimethanol (CHDM) is a mixture of isomeric organic compounds with formula C6H10(CH2OH)2. It is a colorless low-melting solid used in the production of polyester resins. Commercial samples consist of a mixture of cis and trans isomers. It is a di-substituted derivative of cyclohexane and is classified as a diol, meaning that it has two OH functional groups. Commercial CHDM typically has a cis/trans ratio of 30:70.
Production
CHDM is produced by catalytic hydrogenation of dimethyl terephthalate (DMT). The reaction is conducted in two steps, beginning with the conversion of DMT to the diester dimethyl 1,4-cyclohexanedicarboxylate (DMCD):
C6H4(CO2CH3)2 + 3 H2 → C6H10(CO2CH3)2
In the second step DMCD is further hydrogenated to CHDM:
C6H10(CO2CH3)2 + 4 H2 → C6H10(CH2OH)2 + 2 CH3OH
A copper chromite catalyst is usually used industrially. The cis/trans ratio of the CHDM is affected by the catalyst.
Byproducts of this process are 4-methylcyclohexanemethanol (CH3C6H10CH2OH) and the monoester methyl 4-methylcyclohexanecarboxylate (CH3C6H10CO2CH3, CAS registry number 51181-40-9). The leading producers of CHDM are Eastman Chemical in the US and SK Chemicals in South Korea.
Applications
Via the process called polycondensation, CHDM is a precursor to polyesters. It is one of the most important comonomers for production of polyethylene terephthalate (PET), or polyethylene terephthalic ester (PETE), from which plastic bottles are made. In addition it may be spun to form carpet fibers.
Thermoplastic polyesters containing CHDM exhibit enhanced strength, clarity, and solvent resistance. The properties of the polyesters vary from the high-melting crystalline poly(1,4-cyclohexylenedimethylene terephthalate), PCT, to the non-crystalline copolyesters derived from both ethylene glycol and CHDM. The properties of these polyesters are also affected by the cis/trans ratio of the CHDM monomer.
CHDM reduces the degree of crystallinity of the PET homopolymer, improving its processability. The copolymer also tends to resist degradation, e.g. to acetaldehyde. The copolymer with PET is known as glycol-modified polyethylene terephthalate, PETG. PETG is used in many fields, including electronics, automotive, barrier packaging, and medical applications.
CHDM is a raw material for the production of 1,4-cyclohexanedimethanol diglycidyl ether, an epoxy diluent. The key use for this diglycidyl ether is to reduce the viscosity of epoxy resins.
References
External links
U.S. National Library of Medicine: Hazardous Substances Databank – 1,4-cyclohexanedimethanol
Monomers
Diols | Cyclohexanedimethanol | [
"Chemistry",
"Materials_science"
] | 736 | [
"Monomers",
"Polymer chemistry"
] |
31,633,580 | https://en.wikipedia.org/wiki/Buff%20strength | Buff strength is a design term used in the certification of rail rolling stock. It refers to the required resistance to deformation or permanent damage due to loads applied at the car's ends, such as in a collision. Particular emphasis on buff strength is placed in the United States, with buff strength requirements there being higher than in Europe.
United States
Buff strength requirements grew out of best-practice design standards during the latter part of the nineteenth century. By the twentieth century, a design limit of was required by federal approval agencies. This was upped to for certain categories in 1945.
Federal requirements for buff strength were set in 1999 at for all passenger-carrying units, unless reduced by waivers or special order. The Federal Static and Strength Regulation (49 Code of Federal Regulations § 238.203) requires that a passenger rail car be able to support a longitudinal static compressive load of without permanent deformation.
There are other strength requirements associated with end-structure design. 49 CFR § 238.211 specifies that the cab ends of locomotives, cab cars, and self-powered multiple-unit cars have lead ends capable of supporting longitudinal force at the top of the underframe, and of force above the top of the underframe.
Europe
There are multiple certifying and approving agencies in Europe, so universal agreement on strength standards is not guaranteed. A 1977 German standard (VÖV 6.030.1/1977) presented values which have been followed by some other countries. The document was revised in 1992 and is presently known as VDV Recommendation 152 - Structural Requirements to Rail Vehicles for Public Mass Transit in Accordance with BOStrab.
In 1995 the European Common Market Committee for Standardization issued a draft document, Structural Requirements of Railway Vehicle Bodies. It mandated differing design loads for vehicles in differing categories, ranging from for tramways to for passenger coaches and locomotives.
In general the European approach to crashworthiness of passenger coaches puts more emphasis on crumple zones rather than buff strength, meaning that the required design loads are less than those in the United States. This is generally agreed to reduce construction costs.
See also
Compressive strength
Container compression test
Crashworthiness
Deformation (engineering)
Headstock (rolling stock)
Railworthiness
References
Rail transport
Deformation (mechanics) | Buff strength | [
"Materials_science",
"Engineering"
] | 453 | [
"Deformation (mechanics)",
"Materials science"
] |
35,841,146 | https://en.wikipedia.org/wiki/Apollonius%20point | In Euclidean geometry, the Apollonius point is a triangle center designated as X(181) in Clark Kimberling's Encyclopedia of Triangle Centers (ETC). It is defined as the point of concurrence of the three line segments joining each vertex of the triangle to the points of tangency formed by the opposing excircle and a larger circle that is tangent to all three excircles.
In the literature, the term "Apollonius points" has also been used to refer to the isodynamic points of a triangle. This usage could also be justified on the ground that the isodynamic points are related to the three Apollonian circles associated with a triangle.
The solution of the Apollonius problem has been known for centuries. But the Apollonius point was first noted in 1987.
Definition
The Apollonius point of a triangle is defined as follows.
Let $ABC$ be any given triangle. Let the excircles of $ABC$ opposite to the vertices $A$, $B$, $C$ be $E_A$, $E_B$, $E_C$ respectively. Let $E$ be the circle which touches the three excircles such that the three excircles are within $E$. Let $A'$, $B'$, $C'$ be the points of contact of the circle $E$ with the three excircles. The lines $AA'$, $BB'$, $CC'$ are concurrent. The point of concurrence is the Apollonius point of $ABC$.
The Apollonius problem is the problem of constructing a circle tangent to three given circles in a plane. In general, there are eight circles touching three given circles. The circle $E$ referred to in the above definition is one of these eight circles touching the three excircles of triangle $ABC$. In the Encyclopedia of Triangle Centers the circle $E$ is called the Apollonius circle of $ABC$.
Trilinear coordinates
The trilinear coordinates of the Apollonius point are
See also
Apollonius' theorem
Apollonius of Perga (262–190 BC), geometer and astronomer
Apollonius problem
Apollonian circles
Isodynamic point of a triangle
References
Triangle centers | Apollonius point | [
"Physics",
"Mathematics"
] | 384 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
35,843,260 | https://en.wikipedia.org/wiki/Vicsek%20model | The Vicsek model is a mathematical model used to describe active matter. One motivation of the study of active matter by physicists is the rich phenomenology associated to this field. Collective motion and swarming are among the most studied phenomena. Within the huge number of models that have been developed to catch such behavior from a microscopic description, the most famous is the model introduced by Tamás Vicsek et al. in 1995.
Physicists have a great interest in this model as it is minimal and describes a kind of universality. It consists of point-like self-propelled particles that evolve at constant speed and align their velocities with those of their neighbours in the presence of noise. Such a model shows collective motion at high particle density or low alignment noise.
Model (mathematical description)
As this model aims at being minimal, it assumes that flocking is due to the combination of any kind of self-propulsion and of effective alignment. Since the speed of each particle is constant, the net momentum of the system is not conserved during collisions.
An individual is described by its position and the angle defining the direction of its velocity at time . The discrete time evolution of one particle is set by two equations:
At each time step Δt, each agent aligns with its neighbours within a given distance r, with an uncertainty due to a noise term η_i(t):
θ_i(t + Δt) = ⟨θ_j(t)⟩_{|r_j − r_i| < r} + η_i(t)
The particle then moves at constant speed v in the new direction:
r_i(t + Δt) = r_i(t) + v Δt (cos θ_i(t + Δt), sin θ_i(t + Δt))
In these equations, ⟨θ_j(t)⟩ denotes the average direction of the velocities of the particles (including particle i) within a circle of radius r surrounding particle i. The average normalized velocity acts as the order parameter for this system, and is given by φ = |Σ_i v_i| / (N v).
The whole model is controlled by three parameters: the density of particles, the amplitude η of the noise on the alignment, and the ratio of the distance travelled per time step to the interaction range, v Δt / r. From these two simple iteration rules, various continuous theories have been elaborated, such as the Toner–Tu theory, which describes the system at the hydrodynamic level.
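The update rules translate directly into a short simulation. Below is a minimal sketch in Python/NumPy; the parameter values, the periodic boundary conditions, and the uniform noise distribution are assumptions chosen for illustration rather than a reference implementation.

```python
import numpy as np

def vicsek_step(pos, theta, box, r, eta, v, dt=1.0, rng=None):
    """One discrete update of the 2D Vicsek model with periodic boundaries.

    pos   : (N, 2) particle positions
    theta : (N,)   heading angles
    box   : box size, r : interaction radius, eta : noise amplitude, v : speed
    """
    rng = np.random.default_rng() if rng is None else rng
    # pairwise separations with minimum-image (periodic) convention
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    neigh = (d ** 2).sum(axis=-1) <= r ** 2          # includes the particle itself
    # average direction of the neighbours, taken from the velocity vectors
    sx = np.where(neigh, np.cos(theta)[None, :], 0.0).sum(axis=1)
    sy = np.where(neigh, np.sin(theta)[None, :], 0.0).sum(axis=1)
    theta = np.arctan2(sy, sx) + eta * rng.uniform(-0.5, 0.5, size=len(theta))
    pos = (pos + v * dt * np.c_[np.cos(theta), np.sin(theta)]) % box
    return pos, theta

def order_parameter(theta):
    """Magnitude of the average normalized velocity."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

rng = np.random.default_rng(0)
N, box, r, eta, v = 300, 10.0, 1.0, 0.3, 0.03
pos = rng.uniform(0.0, box, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)
for _ in range(500):
    pos, theta = vicsek_step(pos, theta, box, r, eta, v, rng=rng)
print(order_parameter(theta))   # close to 1 at low noise / high density
```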
An Enskog-like kinetic theory, which is valid at arbitrary particle density, has been developed. This theory quantitatively describes the formation of steep density waves, also called invasion waves, near the transition to collective motion.
Phenomenology
This model shows a phase transition from disordered motion to large-scale ordered motion. At large noise or low density, particles are on average not aligned, and they can be described as a disordered gas. At low noise and large density, particles are globally aligned and move in the same direction (collective motion); this state is interpreted as an ordered liquid. The transition between these two phases is not continuous: the phase diagram of the system exhibits a first-order phase transition with a microphase separation. In the coexistence region, finite-size liquid bands emerge in a gas environment and move along their transverse direction. Recently, a new phase has been discovered: a polar ordered cross-sea phase of density waves with an inherently selected crossing angle. This spontaneous organization of particles epitomizes collective motion.
Extensions
Since its appearance in 1995 this model has been very popular within the physics community; many scientists have worked on and extended it. For example, one can extract several universality classes from simple symmetry arguments concerning the motion of the particles and their alignment.
Moreover, in real systems, many parameters can be included in order to give a more realistic description, for example attraction and repulsion between agents (finite-size particles), chemotaxis (biological systems), memory, non-identical particles, the surrounding liquid.
A simpler theory, the Active Ising model, has been developed to facilitate the analysis of the Vicsek model.
References
Multi-agent systems | Vicsek model | [
"Engineering"
] | 744 | [
"Artificial intelligence engineering",
"Multi-agent systems"
] |
35,847,357 | https://en.wikipedia.org/wiki/The%20TaiWan%20Ionospheric%20Model | The TaiWan Ionospheric Model (TWIM), developed in 2008, is a three-dimensional numerical and phenomenological model of ionospheric electron density (Ne). The TWIM has been constructed from globally distributed ionosonde foF2 and foE data and vertical Ne profiles retrieved from FormoSat3/COSMIC GPS radio occultation measurements. The TWIM consists of vertically fitted α-Chapman-type layers, with distinct F2, F1, E, and D layers, for which the layer parameters such as peak density, peak density height, and scale height are represented by surface spherical harmonics. These results are useful for providing reliable radio propagation predictions and for investigating near-Earth space and the large-scale Ne distribution, with its diurnal and seasonal variations and geographic features such as the equatorial anomaly. In this way, the continuity of Ne and its derivatives is also maintained, which supports practical schemes for reliable radio propagation prediction.
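As an illustration of the layer representation, the sketch below evaluates α-Chapman-type layers of the kind the model fits, each described by a peak density, peak height, and scale height. The layer parameters here are made-up illustrative values, not TWIM coefficients, and the simple sum of layers is only a toy stand-in for the model's fitted profile.

```python
import numpy as np

def chapman_alpha(h, Nm, hm, H):
    """alpha-Chapman layer: electron density as a function of height.

    Nm : peak electron density, hm : peak height, H : scale height.
    """
    z = (h - hm) / H
    return Nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(90.0, 600.0, 200)                   # height grid, km
ne = (chapman_alpha(h, 1.0e12, 300.0, 60.0)         # F2 layer (illustrative)
      + chapman_alpha(h, 2.0e11, 200.0, 40.0)       # F1 layer
      + chapman_alpha(h, 1.0e11, 110.0, 10.0))      # E layer
print(h[np.argmax(ne)])   # profile peaks near the assumed F2 peak height
```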
References
The information in this article is based on that in its Chinese equivalent.
Atmospheric dispersion modeling
2008 in science | The TaiWan Ionospheric Model | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 217 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
23,036,926 | https://en.wikipedia.org/wiki/Sustainable%20preservation | In historic preservation, sustainable preservation is the idea that preservation has tangible ecological benefits, on the basis that the most sustainable building is one that is already built. Historic buildings can have advantages over new construction with their often central location, historic building materials, and unique characteristics of craftsmanship. Arguing for these connections is at least partially an outgrowth of the green building movement with its emphasis on new construction. Sustainable preservation borrows many of the same principles of sustainable architecture, though is unique by focusing on older buildings versus new construction. The term "sustainable preservation" is also utilized to refer to the preservation of global heritage, archaeological and historic sites through the creation of economically sustainable businesses which support such preservation, such as the Sustainable Preservation Initiative and the Global Heritage Fund.
History
The U.S. Green Building Council (USGBC) has popularized sustainability initiatives since their founding in 1993. Their LEED certification allowed professionals to develop expertise in the field of green building. The LEED Green Building Rating System with benchmarks was established in 2000.
The Association for Preservation Technology International formed a "Sustainable Preservation Committee" in 2004 to provide an arena for discussion and education on the relationship between historic preservation and sustainability. Among the early discussions was a workshop in Halifax, held in 2005. This was followed by a workshop on "Greenbuild & LEED for Historic Building" in November 2006. The APTI annual conference in Montreal from October 13–17, 2008, also included a symposium on sustainable heritage conservation.
The National Trust for Historic Preservation also included sustainability among the several issues the Trust works on. The Trust's position statement on sustainability is:
Historic preservation can – and should – be an important component of any effort to promote sustainable development. The conservation and improvement of our existing built resources, including re-use of historic and older buildings, greening the existing building stock, and reinvestment in older and historic communities, is crucial to combating climate change.
National Trust President Richard Moe addressed the USGBC on November 20, 2008. His speech laid out several principles in an effort to find common ground:
Promote a culture of reuse
Reinvest at a Community Scale
Value the Lessons of Heritage Buildings and Communities
Make Use of the Economic Advantages of Reuse, Reinvestment and Retrofits
Re-imagine Historic Preservation Policies and Practices as They Relate to Sustainability
Take Immediate and Decisive Action
The Kresge Foundation led a Green Building Initiative from 2003 to May 29, 2009. The initiative provided planning grants for nonprofit organizations that went on to build green buildings. The foundation also demonstrated their commitment to sustainability initiatives through construction of a green headquarters in Troy, Michigan. This building incorporated a historic building on the site with new construction. These facilities were completed in 2006, and in 2008 received the Platinum-level rating from the USGBC.
Notable projects
Trinity Church, Boston, Massachusetts
United States Naval Academy, Main Academic Complex, Annapolis, Maryland
Cambridge City Hall Annex, Cambridge, Massachusetts - in 2005 was the oldest building certified under the USGBC LEED for New Construction, earning the Gold Rating
Hudson Area Association Library, Hudson, New York
J.W. McCormack Federal Courthouse and Post Office, Boston, Massachusetts
Villagra Building, Santa Fe, New Mexico
William A. Kerr Building, St. Louis, Missouri
Howard Hall, Athens, New York
References
External links
Association for Preservation Technology International, an organization concerned with technologies used for conserving historic structures and their settings.
National Trust for Historic Preservation, the principal non-profit preservation advocacy organization in the United States.
Standards and Guidelines for Preservation in the United States
Sustainable Preservation Initiative, a charitable organization that preserves the world's cultural heritage through local economic development
Conservation and restoration of cultural heritage
Cultural heritage
Historic preservation
Sustainable building | Sustainable preservation | [
"Engineering"
] | 751 | [
"Building engineering",
"Sustainable building",
"Construction"
] |
23,039,174 | https://en.wikipedia.org/wiki/Eigengap | In linear algebra, the eigengap of a linear operator is the difference between two successive eigenvalues, where eigenvalues are sorted in ascending order.
The Davis–Kahan theorem, named after Chandler Davis and William Kahan, uses the eigengap to show how eigenspaces of an operator change under perturbation. In spectral clustering, the eigengap is often referred to as the spectral gap, although the spectral gap is often defined in a broader sense than the eigengap.
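As a concrete computation, the sketch below (a minimal NumPy illustration; the example matrix is an arbitrary graph Laplacian chosen for the demonstration) sorts the eigenvalues of a symmetric matrix in ascending order and returns the successive differences. In spectral clustering, a large gap after the k-th eigenvalue of the graph Laplacian is often taken as a heuristic for choosing k clusters.

```python
import numpy as np

def eigengaps(A):
    """Differences between successive eigenvalues of a symmetric matrix,
    with the eigenvalues sorted in ascending order."""
    eigvals = np.sort(np.linalg.eigvalsh(A))
    return np.diff(eigvals)

# Example: graph Laplacian of two disjoint triangles.  The spectrum is
# 0, 0, 3, 3, 3, 3, so the large gap sits after the second eigenvalue.
L = np.array([[ 2, -1, -1,  0,  0,  0],
              [-1,  2, -1,  0,  0,  0],
              [-1, -1,  2,  0,  0,  0],
              [ 0,  0,  0,  2, -1, -1],
              [ 0,  0,  0, -1,  2, -1],
              [ 0,  0,  0, -1, -1,  2]], dtype=float)
print(eigengaps(L))   # [0. 3. 0. 0. 0.] up to rounding
```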
See also
Eigenvalue perturbation
References
Linear algebra | Eigengap | [
"Mathematics"
] | 127 | [
"Linear algebra",
"Algebra"
] |
23,039,694 | https://en.wikipedia.org/wiki/Phenylethylidenehydrazine | Phenylethylidenehydrazine (PEH), also known as 2-phenylethylhydrazone or β-phenylethylidenehydrazine, is a GABA transaminase inhibitor. It is a metabolite of the antidepressant phenelzine and is responsible for its elevation of GABA concentrations. PEH may contribute to phenelzine's anxiolytic effects.
See also
Phenelzine
References
Hydrazones
GABA transaminase inhibitors
Human drug metabolites | Phenylethylidenehydrazine | [
"Chemistry"
] | 116 | [
"Chemicals in medicine",
"Hydrazones",
"Functional groups",
"Human drug metabolites"
] |
23,039,832 | https://en.wikipedia.org/wiki/Melatonin%20receptor%201C | Melatonin receptor 1C, also known as MTNR1C, is a protein that is encoded by the Mtnr1c gene. This receptor has been identified in fish, amphibia, and birds, but not in humans.
References
See also
Melatonin receptor
Further reading
Classification of protein families - Melatonin receptor type 1C
Melatonin receptor genes (mel-1a, mel-1b, mel-1c) are differentially expressed in the avian germ line
G protein-coupled receptors
1C | Melatonin receptor 1C | [
"Chemistry"
] | 121 | [
"G protein-coupled receptors",
"Signal transduction"
] |
2,904,472 | https://en.wikipedia.org/wiki/Tetrahedral%20symmetry | A regular tetrahedron has 12 rotational (or orientation-preserving) symmetries, and a symmetry order of 24 including transformations that combine a reflection and a rotation.
The group of all (not necessarily orientation preserving) symmetries is isomorphic to the group S4, the symmetric group of permutations of four objects, since there is exactly one such symmetry for each permutation of the vertices of the tetrahedron. The set of orientation-preserving symmetries forms a group referred to as the alternating subgroup A4 of S4.
Details
Chiral tetrahedral symmetry, full (achiral) tetrahedral symmetry, and pyritohedral symmetry are discrete point symmetries (or equivalently, symmetries on the sphere). They are among the crystallographic point groups of the cubic crystal system.
Seen in stereographic projection, the edges of the tetrakis hexahedron form 6 circles (or centrally radial lines) in the plane. Each of these 6 circles represents a mirror line in tetrahedral symmetry. These circles intersect at order-2 and order-3 gyration points.
Chiral tetrahedral symmetry
T, 332, [3,3]+, or 23, of order 12 – chiral or rotational tetrahedral symmetry. There are three orthogonal 2-fold rotation axes, like chiral dihedral symmetry D2 or 222, with in addition four 3-fold axes, centered between the three orthogonal directions. This group is isomorphic to A4, the alternating group on 4 elements; in fact it is the group of even permutations of the four 3-fold axes: e, (123), (132), (124), (142), (134), (143), (234), (243), (12)(34), (13)(24), (14)(23).
The conjugacy classes of T are:
identity
4 × rotation by 120° clockwise (seen from a vertex): (234), (143), (412), (321)
4 × rotation by 120° counterclockwise (ditto)
3 × rotation by 180°
The rotations by 180°, together with the identity, form a normal subgroup of type Dih2, with quotient group of type Z3. The three elements of the latter are the identity, "clockwise rotation", and "anti-clockwise rotation", corresponding to permutations of the three orthogonal 2-fold axes, preserving orientation.
A4 is the smallest group demonstrating that the converse of Lagrange's theorem is not true in general: given a finite group G and a divisor d of |G|, there does not necessarily exist a subgroup of G with order d; the group A4 has no subgroup of order 6. Although this is a property of the abstract group in general, it is clear from the isometry group of chiral tetrahedral symmetry: because of the chirality the subgroup would have to be C6 or D3, but neither applies.
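This can be checked by brute force. The sketch below is an illustrative Python check (not tied to any reference implementation): it builds the 12 even permutations of four objects, which realize the rotation group T, and confirms that no 6-element subset containing the identity is closed under composition, so no subgroup of order 6 exists.

```python
from itertools import combinations, permutations

def is_even(p):
    """Even permutation test by counting inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

def compose(p, q):
    """Composition of permutations: (p * q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

A4 = [p for p in permutations(range(4)) if is_even(p)]
assert len(A4) == 12      # the rotation group T, acting on the four 3-fold axes

identity = (0, 1, 2, 3)
has_order6_subgroup = any(
    identity in subset and all(compose(a, b) in subset for a in subset for b in subset)
    for s in combinations(A4, 6)
    for subset in [set(s)]
)
print(has_order6_subgroup)   # False: A4 has no subgroup of order 6
```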
Subgroups of chiral tetrahedral symmetry
Achiral tetrahedral symmetry
Td, *332, [3,3] or 4̄3m, of order 24 – achiral or full tetrahedral symmetry, also known as the (2,3,3) triangle group. This group has the same rotation axes as T, but with six mirror planes, each through two 3-fold axes. The 2-fold axes are now S4 (4̄) axes. Td and O are isomorphic as abstract groups: they both correspond to S4, the symmetric group on 4 objects. Td is the union of T and the set obtained by combining each element of O \ T with inversion. See also the isometries of the regular tetrahedron.
The conjugacy classes of Td are:
identity
8 × rotation by 120° (C3)
3 × rotation by 180° (C2)
6 × reflection in a plane through two rotation axes (Cs)
6 × rotoreflection by 90° (S4)
Subgroups of achiral tetrahedral symmetry
Pyritohedral symmetry
Th, 3*2, [4,3+] or m3̄, of order 24 – pyritohedral symmetry. This group has the same rotation axes as T, with mirror planes through two of the orthogonal directions. The 3-fold axes are now S6 (3̄) axes, and there is a central inversion symmetry. Th is isomorphic to T × Z2: every element of Th is either an element of T, or one combined with inversion. Apart from these two normal subgroups, there is also a normal subgroup D2h (that of a cuboid). It is the direct product of the normal subgroup of T (see above) with Ci. The quotient group is the same as above: of type Z3. The three elements of the latter are the identity, "clockwise rotation", and "anti-clockwise rotation", corresponding to permutations of the three orthogonal 2-fold axes, preserving orientation.
It is the symmetry of a cube with on each face a line segment dividing the face into two equal rectangles, such that the line segments of adjacent faces do not meet at the edge. The symmetries correspond to the even permutations of the body diagonals and the same combined with inversion. It is also the symmetry of a pyritohedron, which is extremely similar to the cube described, with each rectangle replaced by a pentagon with one symmetry axis and 4 equal sides and 1 different side (the one corresponding to the line segment dividing the cube's face); i.e., the cube's faces bulge out at the dividing line and become narrower there. It is a subgroup of the full icosahedral symmetry group (as isometry group, not just as abstract group), with 4 of the 10 3-fold axes.
The conjugacy classes of Th include those of T, with the two classes of 4 combined, and each with inversion:
identity
8 × rotation by 120° (C3)
3 × rotation by 180° (C2)
inversion (S2)
8 × rotoreflection by 60° (S6)
3 × reflection in a plane (Cs)
Subgroups of pyritohedral symmetry
Solids with chiral tetrahedral symmetry
The icosahedron colored as a snub tetrahedron has chiral symmetry.
Solids with full tetrahedral symmetry
See also
Octahedral symmetry
Icosahedral symmetry
Binary tetrahedral group
Citations
References
Peter R. Cromwell, Polyhedra (1997), p. 295
The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss,
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
N.W. Johnson: Geometries and Transformations, (2018) Chapter 11: Finite symmetry groups, 11.5 Spherical Coxeter groups
External links
Finite groups
Rotational symmetry
Tetrahedra | Tetrahedral symmetry | [
"Physics",
"Mathematics"
] | 1,478 | [
"Mathematical structures",
"Finite groups",
"Algebraic structures",
"Symmetry",
"Rotational symmetry"
] |
2,904,516 | https://en.wikipedia.org/wiki/Reactor%20pressure%20vessel | A reactor pressure vessel (RPV) in a nuclear power plant is the pressure vessel containing the nuclear reactor coolant, core shroud, and the reactor core.
Classification of nuclear power reactors
Russian Soviet era RBMK reactors have each fuel assembly enclosed in an individual 8 cm diameter pipe rather than having a pressure vessel. Whilst most power reactors do have a pressure vessel, they are generally classified by the type of coolant rather than by the configuration of the vessel used to contain the coolant. The classifications are:
Light-water reactor - Includes the pressurized water reactor and the boiling water reactor. Most nuclear power reactors are of this type.
Graphite-moderated reactor - Includes the Chernobyl reactor (RBMK), which has a highly unusual reactor configuration compared to the vast majority of civilian nuclear power plants in Russia and around the world.
Gas cooled thermal reactor - Includes the Advanced Gas-cooled Reactor, the gas cooled fast breeder reactor, and the high temperature gas cooled reactor. An example of a gas cooled reactor is the British Magnox.
Pressurized heavy-water reactor - utilizes heavy water, or water with a higher than normal proportion of the hydrogen isotope deuterium, in some manner. However, D2O (heavy water) is more expensive and may be used as a main component, but not necessarily as a coolant in this case. An example of a heavy water reactor is Canada's CANDU reactor.
Liquid metal cooled reactor - utilizes a liquid metal, such as sodium or a lead-bismuth alloy to cool the reactor core.
Molten salt reactor - salts, typically fluorides of the alkali metals and of the alkali earth metals, are used as the coolant. Operation is similar to metal-cooled reactors with high temperatures and low pressures, reducing pressure exerted on the reactor vessel versus water or steam-cooled designs.
Of the main classes of reactor with a pressure vessel, the pressurized water reactor is unique in that the pressure vessel suffers significant neutron irradiation (called fluence) during operation, and may become brittle over time as a result. In particular, the larger pressure vessel of the boiling water reactor is better shielded from the neutron flux, so although more expensive to manufacture in the first place because of this extra size, it has an advantage in not needing annealing to extend its life.
Annealing of pressurized water reactor vessels to extend their working life is a complex and high-value technology being actively developed by both nuclear service providers (AREVA) and operators of pressurized water reactors.
Components of a pressurized water reactor pressure vessel
All pressurized water reactor pressure vessels share some features regardless of the particular design.
Reactor vessel body
The reactor vessel body is the largest component and is designed to contain the fuel assembly, coolant, and fittings to support coolant flow and support structures. It is usually cylindrical in shape and is open at the top to allow the fuel to be loaded.
Reactor vessel head
This structure is attached to the top of the reactor vessel body. It contains penetrations to allow the control rod driving mechanism to attach to the control rods in the fuel assembly. The coolant level measurement probe also enters the vessel through the reactor vessel head.
Fuel assembly
The fuel assembly of nuclear fuel usually consisting of uranium or uranium–plutonium mixes. It is usually a rectangular block of gridded fuel rods.
Neutron reflector or absorber
Protecting the inside of the vessel from fast neutrons escaping from the fuel assembly is a cylindrical shield wrapped around the fuel assembly. Reflectors send the neutrons back into the fuel assembly to better utilize the fuel. The main purpose though is to protect the vessel from fast neutron induced damage that can make the vessel brittle and reduce its useful life.
Materials
The RPV plays a critical role in the safety of the PWR reactor, and the materials used must be able to contain the reactor core at elevated temperatures and pressures. The materials used in the cylindrical shell of the vessels have evolved over time, but in general they consist of low-alloy ferritic steels clad with 3–10 mm of austenitic stainless steel. The stainless steel cladding is primarily used in locations that come into contact with coolant in order to minimize corrosion. Through the mid-1960s, SA-302, Grade B, a molybdenum-manganese plate steel, was used in the body of the vessel. As changing designs required larger pressure vessels, the addition of roughly 0.4–0.7 wt% nickel to this alloy was required to increase the yield strength. Other common steel alloys include SA-533 Grade B Class 1 and SA-508 Class 2. Both materials have main alloying elements of nickel, manganese, molybdenum, and silicon, but the latter also includes 0.25–0.45 wt% chromium. All alloys listed in the reference also have >0.04 wt% sulfur.
Low-alloyed NiMoMn ferritic steels are attractive for this purpose due to their high thermal conductivity and low thermal expansion, properties that make them resistant to thermal shock. However, when considering the properties of these steels, one must take into account the response it will have to radiation damage. Due to harsh conditions, the RPV cylinder shell material is often the lifetime-limiting component for a nuclear reactor. Understanding the effects radiation has on the microstructure in addition to the physical and mechanical properties will allow scientists to design alloys more resistant to radiation damage.
In 2018 Rosatom announced it had developed a thermal annealing technique for RPVs which ameliorates radiation damage and extends service life by between 15 and 30 years. This had been demonstrated on unit 1 of the Balakovo Nuclear Power Plant.
Radiation damage in metals and alloys
Due to the nature of nuclear energy generation, the materials used in the RPV are constantly bombarded by high-energy particles. These particles can either be neutrons or fragments of an atom created by a fission event. When one of these particles collides with an atom in the material, it will transfer some of its kinetic energy and knock the atom out of its position in the lattice. When this happens, this primary "knock-on" atom (PKA) that was displaced and the energetic particle may rebound and collide with other atoms in the lattice. This creates a chain reaction that can cause many atoms to be displaced from their original positions. This atomic movement leads to the creation of many types of defects.
The accumulation of various defects can cause microstructural changes that can lead to a degradation in macroscopic properties. As previously mentioned, the chain reaction caused by a PKA often leaves a trail of vacancies and clusters of defects at the edge. This is called a displacement cascade. The vacancy-rich core of a displacement cascade can also collapse into dislocation loops. Due to irradiation, materials tend to develop a higher concentration of defects than is present in typical steels, and the high temperatures of operation induce migration of the defects. This can cause things like recombination of interstitials and vacancies and clustering of like defects, which can either create or dissolve precipitates or voids. Examples of sinks, or thermodynamically favorable places for defects to migrate to, are grain boundaries, voids, incoherent precipitates, and dislocations.
Radiation-induced segregation
Interactions between defects and alloying elements can cause a redistribution of atoms at sinks such as grain boundaries. The physical effect that can occur is that certain elements will be enriched or depleted in these areas, which often leads to embrittlement of grain boundaries or other detrimental property changes. This is because there is a flux of vacancies towards a sink and a flux of atoms away or toward the sink that may have varying diffusion coefficients. The uneven rates of diffusion cause a concentration of atoms that will not necessarily be in the correct alloy proportions. It has been reported that nickel, copper and silicon tend to be enriched at sinks, whereas chromium tends to be depleted. The resulting physical effect is changing chemical composition at grain boundaries or around voids/incoherent precipitates, which also serve as sinks.
Formation of voids and bubbles
Voids form due to a clustering of vacancies and generally form more readily at higher temperatures. Bubbles are simply voids filled with gas; they will occur if transmutation reactions are present, meaning a gas is formed due to the breakdown of an atom caused by neutron bombardment. The biggest issue with voids and bubbles is dimensional instability. An example of where this would be very problematic is areas with tight dimensional tolerances, such as threads on a fastener.
Irradiation hardening
The creation of defects such as voids or bubbles, precipitates, dislocation loops or lines, and defect clusters can strengthen a material because they block dislocation motion. The movement of dislocations is what leads to plastic deformation. While this hardens the material, the downside is that there is a loss of ductility. Losing ductility, or increasing brittleness, is dangerous in RPVs because it can lead to catastrophic failure without warning. When ductile materials fail, there is substantial deformation before failure, which can be monitored. Brittle materials will crack and explode when under pressure without much prior deformation, so there is not much engineers can do to detect when the material is about to fail. A particularly damaging element in steels that can lead to hardening or embrittlement is copper. Cu-rich precipitates are very small (1-3 nm) so they are effective at pinning dislocations. It has been recognized that copper is the dominant detrimental element in steels used for RPVs, especially if the impurity level is greater than 0.1 wt%. Thus, the development of "clean" steels, or ones with very low impurity levels, is important in reducing radiation-induced hardening.
Creep
Creep occurs when a material is held under levels of stress below their yield stress that causes plastic deformation over time. This is especially prevalent when a material is exposed to high stresses at elevated temperatures, because diffusion and dislocation motion occur more rapidly. Irradiation can cause creep due to the interaction between stress and the development of the microstructure. In this case, the increase in diffusivities due to high temperatures is not a very strong factor for causing creep. The dimensions of the material are likely to increase in the direction of the applied stress due to the creation of dislocation loops around defects that formed due to radiation damage. Furthermore, applied stress can allow interstitials to be more readily absorbed in dislocation, which assists in dislocation climb. When dislocations are able to climb, excess vacancies are left, which can also lead to swelling.
Irradiation assisted stress corrosion cracking
Due to the embrittlement of grain boundaries or other defects that can serve as crack initiators, the addition of radiation attack at cracks can cause intergranular stress corrosion cracking. The main environmental stressor that forms due to radiation is hydrogen embrittlement at crack tips. Hydrogen ions are created when radiation splits water molecules, which is present because water is the coolant in PWRs, into OH− and H+. There are several suspected mechanisms that explain hydrogen embrittlement, three of which are the decohesion mechanism, the pressure theory, and the hydrogen attack method. In the decohesion mechanism, it is thought that the accumulation of hydrogen ions reduces the metal-to-metal bond strength, which makes it easier to cleave atoms apart. The pressure theory is the idea that hydrogen can precipitate as a gas at internal defects and create bubbles within the material. The stress caused by the expanding bubble in addition to the applied stress is what lowers the overall stress required to fracture the material. The hydrogen attack method is similar to the pressure theory, but in this case it is suspected that the hydrogen reacts with carbon in the steel to form methane, which then forms blisters and bubbles at the surface. In this case, the added stress by the bubbles is enhanced by the decarburization of the steel, which weakens the metal. In addition to hydrogen embrittlement, radiation induced creep can cause the grain boundaries to slide against each other. This destabilizes the grain boundaries even further, making it easier for a crack to propagate along its length.
Designing radiation-resistant materials for reactor pressure vessels
Very aggressive environments require novel materials approaches in order to combat declines in mechanical properties over time. One method researchers have sought to use is introducing features to stabilize displaced atoms. This can be done by adding grain boundaries, oversized solutes, or small oxide dispersants to minimize defect movement. By doing this, there would be less radiation-induced segregation of elements, which would in turn lead to more ductile grain boundaries and less intergranular stress corrosion cracking. Blocking dislocation and defect movement would also help to increase the resistance to radiation assisted creep. Attempts have been reported of instituting yttrium oxides to block dislocation motion, but it was found that technological implementation posed a greater challenge than expected. Further research is required to continue improving the radiation damage resistance of structural materials used in nuclear power plants.
Manufacturers
Because of the extreme requirements needed to build large state-of-the-art reactor pressure vessels and the limited market, there are only a handful of manufacturers in the world including:
China's First Heavy Industries, Erzhong Group, Harbin Electric and Shanghai Electric.
France's Framatome (former Areva)
Indian conglomerate Larsen & Toubro's subsidiary L&T Special Steels and Heavy Forgings Limited in partnership with Bhabha Atomic Research Centre and NPCIL
Japan's Japan Steel Works and IHI Corporation (in joint venture with Toshiba, former)
Russia's United Heavy Machinery (OMZ-Izhora), ZiO-Podolsk and AEM-Atommash Volgodonsk.
South Korea's Doosan Group.
United Kingdom: Rolls-Royce plc produces reactors for Royal Navy submarines.
See also
Nuclear physics
Nuclear reactor
Nuclear reactor physics
Nuclear reactor vessels
Radiation damage
References
External links
Nuclear power plant components
Pressure vessels
Nuclear reactor safety | Reactor pressure vessel | [
"Physics",
"Chemistry",
"Engineering"
] | 2,958 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Pressure vessels"
] |
2,904,877 | https://en.wikipedia.org/wiki/Shearing%20interferometer | The shearing interferometer is an extremely simple means to observe interference and to use this phenomenon to test the collimation of light beams, especially from laser sources, whose coherence length is usually significantly longer than the thickness of the shear plate, so that the basic condition for interference is fulfilled.
Function
The testing device consists of a high-quality optical glass, such as N-BK7, with extremely flat optical surfaces that are usually at a slight angle to each other. When a plane wave is incident at an angle of 45°, which gives maximum sensitivity, it is reflected twice, once at each surface. The two reflections are laterally separated due to the finite thickness of the plate and the wedge. This separation is referred to as the shear and has given the instrument its name. The shear can also be produced by gratings.
Parallel-sided shear plates are sometimes used, but the interpretation of the interference fringes of wedged plates is relatively easy and straightforward. Wedged shear plates produce a graded path difference between the front and back surface reflections; as a consequence, a parallel beam of light produces a linear fringe pattern within the overlap.
With a plane wavefront incident, the overlap of the two reflected beams shows equally spaced interference fringes whose spacing is set by the wavelength of the beam, the refractive index of the plate, and the wedge angle; the smaller the wedge angle, the wider the fringe spacing. This description assumes that the distance from the wedged shear plate to the observation plane is small relative to the wavefront radius of curvature at the observation plane. The fringes are exactly perpendicular to the wedge orientation and parallel to a wire cursor that is usually present and aligned along the beam axis in the shearing interferometer. The orientation of the fringes changes when the beam is not perfectly collimated: for a non-collimated beam incident on a wedged shear plate, the path difference between the two reflected wavefronts is increased or decreased relative to the case of perfect collimation, depending on the sign of the curvature. The fringe pattern is then rotated, and the beam's wavefront radius of curvature can be calculated from the shear distance, the fringe spacing, the wavelength, and the angular deviation of the fringe alignment from that of perfect collimation.
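The effect can be illustrated with a simple two-beam model: the two reflections are copies of the test wavefront displaced by the shear, and the wedge adds a linear optical-path ramp across the beam. The sketch below uses arbitrary, assumed parameter values and an idealized geometry; it is meant only to show why the fringes are straight for a collimated beam and rotate when the wavefront is curved.

```python
import numpy as np

lam  = 633e-9     # wavelength [m]
s    = 5e-3       # lateral shear [m]
ramp = 2e-4       # wedge-induced optical path gradient along y [m per m] (assumed)
R    = 20.0       # wavefront radius of curvature [m]; use np.inf for a collimated beam

x, y = np.meshgrid(np.linspace(-5e-3, 5e-3, 400), np.linspace(-5e-3, 5e-3, 400))

def sag(x, y, R):
    """Wavefront sag of a spherical wavefront with radius of curvature R."""
    return (x ** 2 + y ** 2) / (2.0 * R)

# optical path difference between the two sheared copies, plus the wedge ramp
opd = sag(x + s / 2, y, R) - sag(x - s / 2, y, R) + ramp * y
fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * opd / lam))   # view with e.g. matplotlib imshow

# For R -> inf the shear term vanishes and the fringes are horizontal; for a
# finite R the shear term contributes s*x/R, which tilts the fringe pattern.
tilt = np.degrees(np.arctan2(s / R, ramp))
print(f"expected fringe tilt from the collimated orientation: {tilt:.1f} deg")
```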
See also
List of types of interferometers
Air-wedge shearing interferometer
Spectral phase interferometry for direct electric-field reconstruction, a type of spectral shearing interferometry, which is similar in concept to the one in the present article, except that the shearing is performed in the frequency domain instead of laterally.
References
External links
University of Erlangen — Optical Design and Microptics
Interferometers | Shearing interferometer | [
"Technology",
"Engineering"
] | 590 | [
"Interferometers",
"Measuring instruments"
] |
2,905,823 | https://en.wikipedia.org/wiki/Packaging%20gas | A packaging gas is used to pack sensitive materials such as food into a modified atmosphere environment. The gas used is usually inert, or of a nature that protects the integrity of the packaged goods, inhibiting unwanted chemical reactions such as food spoilage or oxidation. Some may also serve as a propellant for aerosol sprays like cans of whipped cream. For packaging food, the use of various gases is approved by regulatory organisations.
Their E numbers are included in the following lists in parentheses.
Inert and nonreactive gases
These gas types do not cause a chemical change to the substance that they protect.
argon (E938), an inert gas used for canned products
helium (E939), an inert gas used for canned products
nitrogen (E941), a nonreactive packaging gas and propellant
carbon dioxide (E290), a nonreactive packaging gas and propellant
Propellant gases
Specific kinds of packaging gases are aerosol propellants. These process and assist the ejection of the product from its container.
chlorofluorocarbons known as CFC (E940 and E945), now rarely used because of the damage that they do to the ozone layer:
dichlorodifluoromethane (E940)
chloropentafluoroethane (E945)
nitrous oxide (E942), used for aerosol whipped cream canisters (see Nitrous oxide: Aerosol propellant)
octafluorocyclobutane (E946)
Reactive gases
These must be used with caution as they may have adverse effects when exposed to certain chemicals. They will cause oxidisation or contamination to certain types of materials.
oxygen (E948), used e.g. for packaging of vegetables
hydrogen (E949)
Volatile gases
Hydrocarbon gases approved for use with food need to be used with extreme caution as they are highly combustible; when combined with oxygen they burn very rapidly and may cause explosions in confined spaces. Special precautions must be taken when transporting these gases.
butane (E943a)
isobutane (E943b)
propane (E944)
See also
Shielding gas
References
Food additives
Food science
Hydrogen technologies
Packaging
Industrial gases | Packaging gas | [
"Chemistry"
] | 473 | [
"Chemical process engineering",
"Industrial gases"
] |
2,907,387 | https://en.wikipedia.org/wiki/Signed%20zero | Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are equivalent. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero), regarded as equal by the numerical comparison operations but with possible different behaviors in particular operations. This occurs in the sign-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can still be represented by +0, −0, or 0.
The IEEE 754 standard for floating-point arithmetic (presently used by most computers and programming languages that support floating-point numbers) requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is undefined only for ±0/±0 and ±∞/±∞.
Negatively signed zero echoes the mathematical analysis concept of approaching 0 from below as a one-sided limit, which may be denoted by x → 0−, x → 0−, or x → ↑0. The notation "−0" may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics and other disciplines.
It is claimed that the inclusion of signed zero in IEEE 754 makes it much easier to achieve numerical accuracy in some critical problems, in particular when computing with complex elementary functions. On the other hand, the concept of signed zero runs contrary to the usual assumption made in mathematics that negative zero is the same value as zero. Representations that allow negative zero can be a source of errors in programs, if software developers do not take into account that while the two zero representations behave as equal under numeric comparisons, they yield different results in some operations.
Representations
Binary integer formats can use various encodings. In the widely used two's complement encoding, zero is unsigned. In a 1+7-bit sign-and-magnitude representation for integers, negative zero is represented by the bit string 1000 0000. In an 8-bit ones' complement representation, negative zero is represented by the bit string 1111 1111. In all three encodings, positive or unsigned zero is represented by 0000 0000. However, the latter two encodings (with a signed zero) are uncommon for integer formats. The most common formats with a signed zero are floating-point formats (IEEE 754 formats or similar), described below.
In IEEE 754 binary floating-point formats, zero values are represented by the biased exponent and significand both being zero. Negative zero has the sign bit set to one. One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number (other results may also be possible), or −1.0×0.0, or simply as −0.0.
In IEEE 754 decimal floating-point formats, a negative zero is represented by an exponent being any valid exponent in the range for the format, the true significand being zero, and the sign bit being one.
Properties and handling
The IEEE 754 floating-point standard specifies the behavior of positive zero and negative zero under various operations. The outcome may depend on the current IEEE rounding mode settings.
Notation
In systems that include both signed and unsigned zeros, the notations +0 and −0 are sometimes used to distinguish the signed zeros.
Arithmetic
Addition and multiplication are commutative, but there are some special rules that have to be followed, which mean the usual mathematical rules for algebraic simplification may not apply. The = sign below shows the obtained floating-point results (it is not the usual equality operator).
The usual rule for signs is always followed when multiplying or dividing:
(−0) · |x| = −0 (for x different from ±∞)
(−0) / |x| = −0 (for x different from 0)
There are special rules for adding or subtracting signed zero:
x + (±0) = x (for x different from 0)
x − x = +0 (for any finite x; −0 when rounding toward negative)
Because of negative zero (and also when the rounding mode is upward or downward), expressions such as x + 0.0 and 0.0 − x, for a floating-point variable x, cannot be replaced by x and −x respectively. However, x − 0.0 can be replaced by x with rounding to nearest (except when x can be a signaling NaN).
Some other special rules:
±0 / ±∞ = ±0 (follows the sign rule for division)
|x| / ±0 = ±∞ (for non-zero x, follows the sign rule for division)
±0 / ±0 = NaN and ±0 × ±∞ = NaN (Not a Number or interrupt for indeterminate form)
Division of a non-zero number by zero sets the divide by zero flag, and an operation producing a NaN sets the invalid operation flag. An exception handler is called if enabled for the corresponding flag.
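These rules can be observed directly in any IEEE 754 environment. The Python sketch below is an illustrative check (CPython floats are IEEE 754 binary64 with round-to-nearest on typical platforms); math.copysign is used to make the sign of a zero-valued result visible.

```python
import math

pz, nz = 0.0, -0.0

assert pz == nz                                   # the two zeros compare equal

assert math.copysign(1.0, -1.0 * 0.0) == -1.0     # (-1) * (+0) gives -0
assert math.copysign(1.0, nz * 5.0) == -1.0       # (-0) * |x| gives -0
assert math.copysign(1.0, nz / 5.0) == -1.0       # (-0) / |x| gives -0

assert math.copysign(1.0, nz + pz) == +1.0        # (-0) + (+0) gives +0 (round to nearest)

x = 3.5
assert math.copysign(1.0, x - x) == +1.0          # x - x gives +0 for finite x

# The zeros behave differently in functions sensitive to the sign bit:
print(math.atan2(0.0, pz))   # 0.0
print(math.atan2(0.0, nz))   # 3.141592653589793 (pi)
```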
Comparisons
According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual (numerical) comparison operators, like the == operators of C and Java. In those languages, special programming tricks may be needed to distinguish the two values:
Type punning the number to an integer type, so as to look at the sign bit in the bit pattern;
using the ISO C copysign() function (IEEE 754 copySign operation) to copy the sign of the zero to some non-zero number;
using the ISO C signbit() macro (IEEE 754 isSignMinus operation) that returns whether the sign bit of a number is set;
taking the reciprocal of the zero to obtain either 1/(+0) = +∞ or 1/(−0) = −∞ (if the division by zero exception is not trapped).
Note: Casting to integral type will not always work, especially on two's complement systems.
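The same tricks can be written in Python, where == also treats the two zeros as equal. The helper below is illustrative (not a standard library function); it inspects the sign via copysign and via the raw IEEE 754 bit pattern.

```python
import math
import struct

def is_negative_zero(x: float) -> bool:
    """True only for -0.0; numeric comparison alone cannot tell the zeros apart."""
    return x == 0.0 and math.copysign(1.0, x) < 0.0

def sign_bit(x: float) -> int:
    """Most significant bit of the IEEE 754 binary64 encoding of x."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return bits >> 63

print(is_negative_zero(-0.0), is_negative_zero(0.0))   # True False
print(sign_bit(-0.0), sign_bit(0.0))                   # 1 0
print(-0.0 == 0.0)                                     # True: ordinary comparison sees them as equal
```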
However, some programming languages may provide alternative comparison operations that do distinguish the two zeros. This is the case, for example, of the equals method in Java's Double wrapper class.
In rounded values such as temperatures
Informally, one may use the notation "−0" for a negative value that was rounded to zero. This notation may be useful when a negative sign is significant; for example, when tabulating Celsius temperatures, where a negative sign means below freezing.
In statistical mechanics
In statistical mechanics, one sometimes uses negative temperatures to describe systems with population inversion, which can be considered to have a temperature greater than positive infinity, because the coefficient of energy in the population distribution function is −1/Temperature. In this context, a temperature of −0 is a (theoretical) temperature larger than any other negative temperature, corresponding to the (theoretical) maximum conceivable extent of population inversion, the opposite extreme to +0.
See also
Line with two origins
Extended real number line
References
a decimal floating-point specification that includes negative zero
Further reading
the changes in the Fortran SIGN function in Fortran 95 to accommodate negative zero
JScript's floating-point type with negative zero by definition
representation of negative zero in the Java virtual machine
how to handle negative zero when comparing floating-point numbers
one's complement numbers on the UNIVAC 1100 family computers
Computer arithmetic
0 (number) | Signed zero | [
"Mathematics"
] | 1,463 | [
"Computer arithmetic",
"Arithmetic"
] |
2,908,108 | https://en.wikipedia.org/wiki/Podophyllotoxin | Podophyllotoxin (PPT) is the active ingredient in Podofilox, a medical cream used to treat genital warts and molluscum contagiosum. It is not recommended for HPV infections without external warts. It can be applied either by a healthcare provider or the patient themselves.
Podophyllotoxin is a non-alkaloid lignan extracted from the roots and rhizomes of plants of the genus Podophyllum. A less refined form known as podophyllum resin is also available, but has greater side effects.
Podophyllotoxin was first isolated in pure form in 1880 by Valerian Podwyssotzki (1818 – 28 January 1892), a Polish-Russian privatdozent at the University of Dorpat (now Tartu, Estonia) and assistant at the Pharmacological Institute there.
PPT is on the World Health Organization's List of Essential Medicines.
Medical uses
Podophyllotoxin possesses a large number of medical applications, as it inhibits replication of both cellular and viral DNA by binding necessary enzymes. It can additionally destabilize microtubules and prevent cell division. Because of these interactions it is considered an antimitotic drug. It has been employed in the treatment of a wide range of medical conditions, from infections to cancer, although modern medicine typically relies on less orally toxic derivatives when antimitotic effects are desired.
Podophyllotoxin cream is commonly prescribed as a potent topical antiviral. It is used for the treatment of human papillomavirus (HPV) infections with external warts as well as molluscum contagiosum infections. 0.5% PPT cream is prescribed for twice daily applications for 3 days followed by 4 days with no application; this weekly cycle is repeated for four weeks. PPT can also be prescribed as a gel, as opposed to a cream, and is also sold under the names condyline and warticon.
Adverse effects
The most common side effects of podophyllotoxin cream are typically limited to irritation of tissue surrounding the application site, usually burning, redness, pain, itching, and swelling. Application is sometimes immediately followed by burning or itching. Small sores, itching, and peeling skin may also follow. For these reasons it is recommended that application be done in a way that limits contact with surrounding uninfected tissue.
Neither podophyllin resin nor podophyllotoxin lotions or gels are used during pregnancy because these medications have been shown to be embryotoxic in both mice and rats. Antimitotic agents are in general not typically recommended during pregnancy. Additionally, it has not been determined if podophyllotoxin can pass into breast milk from topical applications and therefore it is not recommended for breastfeeding women.
Though podophyllotoxin cream is safe for topical use, it can cause CNS depression as well as enteritis if ingested. The podophyllum resin from which podophyllotoxin is derived has the same effect.
Mechanism of action
Podophyllotoxin destabilizes microtubules by binding tubulin and thus preventing cell division. In contrast, some of its derivatives display binding activity to the enzyme topoisomerase II (Topo II) during the late S and early G2 stages. For instance, etoposide binds and stabilizes the temporary DNA break caused by the enzyme, disrupts the reparation of the break through which the double-stranded DNA passes, and consequently prevents DNA unwinding, which is necessary for subsequent replication. Mutants resistant to either podophyllotoxin or to its topoisomerase II-inhibitory derivatives such as etoposide (VP-16) have been described in Chinese hamster cells. The mutually exclusive cross-resistance patterns of these mutants provide a highly specific means to distinguish the two kinds of podophyllotoxin derivatives. Mutant Chinese hamster cells resistant to podophyllotoxin are affected in a protein P1 that was later identified as the mammalian HSP60 or chaperonin protein.
Furthermore, podophyllotoxin is classified as an aryltetralin lignan for its ability to bind and deactivate DNA. It and its derivates bind Topo II and prevent its ability to catalyze the rejoining of DNA which has been broken for replication. Lastly, experimental evidence has shown that these aryltetralin lignans can interact with cellular factors to create chemical DNA adducts, thus further deactivating DNA.
Chemistry
Structural characteristic
The structure of podophyllotoxin was first elucidated in the 1930s. Podophyllotoxin bears four consecutive chiral centers, labelled C-1 through C-4 in the following image. The molecule also contains four almost planar fused rings. The podophyllotoxin molecule includes a number of oxygen-containing functional groups: an alcohol, a lactone, three methoxy groups, and an acetal.
Derivatives of podophyllotoxin are synthesized as properties of the rings and carbon 1 through 4 are diversified. For example, ring A is not essential to antimitotic activity. Aromatization of ring C leads to loss of activity, possibly from ring E no longer being placed on the axial position. In addition, the stereochemistry at C-2 and C-3 configures a trans-lactone, which has more activity than the cis counterpart. Chirality at C-1 is also important as it implies an axial position for ring E.
Biosynthesis
The biosynthetic route of podophyllotoxin was not completely elucidated for many years; however, in September 2015, the identity of the six missing enzymes in podophyllotoxin biosynthesis were reported for the first time. Several prior studies have suggested a common pathway starting from coniferyl alcohol being converted to (+)-pinoresinol in the presence of a one-electron oxidant through dimerization of stereospecific radical intermediate. Pinoresinol is subsequently reduced in the presence of co-factor NADPH to first lariciresinol, and ultimately secoisolariciresinol. Lactonization on secoisolariciresinol gives rise to matairesinol. Secoisolariciresinol is assumed to be converted to yatein through appropriate quinomethane intermediates, leading to podophyllotoxin.
A sequence of enzymes involved has been reported to be dirigent protein (DIR), to convert coniferyl alcohol to (+)-pinoresinol, which is converted by pinoresinol–lariciresinol reductase (PLR) to (-)-secoisolariciresinol, which is converted by sericoisolariciresinol dehydrogenase (SDH) to (-)-matairesinol, which is converted by CYP719A23 to (-)-pluviatolide, which is likely converted by Phex13114 (OMT1) to (-)-yatein, which is converted by Phex30848 (2-ODD) to (-)-deoxypodophyllotoxin. Though not proceeding through the last step of producing podophyllotoxin itself, a combination of six genes from the mayapple enabled production of the etoposide aglycone in tobacco plants.
Chemical synthesis
Podophyllotoxin has been successfully synthesized in a laboratory; however, synthesis mechanisms require many steps, resulting in a low overall yield. It therefore remains more efficient to obtain podophyllotoxin from natural sources.
Four routes have been used to synthesize podophyllotoxin with varying success: an oxo ester route, lactonization of a dihydroxy acid, cyclization of a conjugate addition product, and a Diels-Alder reaction.
Derivatives
Podophyllotoxin and its derivatives are used as cathartic, purgative, antiviral agent, vesicant, antihelminthic, and antitumor agents. Podophyllotoxin derived antitumor agents include etoposide and teniposide. These drugs have been successfully used in therapy against numerous cancers including testicular, breast, pancreatic, lung, stomach, and ovarian cancers.
Natural abundance
Podophyllotoxin is present at concentrations of 0.3% to 1.0% by mass in the rhizome of the American mayapple (Podophyllum peltatum). Another common source is the rhizome of Sinopodophyllum hexandrum Royle (Berberidaceae).
It is biosynthesized from two molecules of coniferyl alcohol by phenolic oxidative coupling and a series of oxidations, reductions and methylations.
References
Further reading
Berberidaceae
Antiviral drugs
Lignans
Phenol ethers
Gamma-lactones
Furonaphthodioxoles
Plant toxins | Podophyllotoxin | [
"Chemistry",
"Biology"
] | 1,915 | [
"Antiviral drugs",
"Biocides",
"Chemical ecology",
"Plant toxins"
] |
2,908,224 | https://en.wikipedia.org/wiki/Klein%20geometry | In mathematics, a Klein geometry is a type of geometry motivated by Felix Klein in his influential Erlangen program. More specifically, it is a homogeneous space X together with a transitive action on X by a Lie group G, which acts as the symmetry group of the geometry.
For background and motivation see the article on the Erlangen program.
Formal definition
A Klein geometry is a pair (G, H) where G is a Lie group and H is a closed Lie subgroup of G such that the (left) coset space G/H is connected. The group G is called the principal group of the geometry and G/H is called the space of the geometry (or, by an abuse of terminology, simply the Klein geometry). The space X = G/H of a Klein geometry is a smooth manifold of dimension
dim X = dim G − dim H.
There is a natural smooth left action of G on X given by
g · (aH) = (ga)H.
Clearly, this action is transitive (take a = 1), so that one may then regard X as a homogeneous space for the action of G. The stabilizer of the identity coset H ∈ X is precisely the group H.
Given any connected smooth manifold X and a smooth transitive action by a Lie group G on X, we can construct an associated Klein geometry by fixing a basepoint x0 in X and letting H be the stabilizer subgroup of x0 in G. The group H is necessarily a closed subgroup of G and X is naturally diffeomorphic to G/H.
Two Klein geometries (G1, H1) and (G2, H2) are geometrically isomorphic if there is a Lie group isomorphism φ : G1 → G2 so that φ(H1) = H2. In particular, if φ is conjugation by an element g ∈ G, we see that (G, H) and (G, gHg−1) are isomorphic. The Klein geometry associated to a homogeneous space X is then unique up to isomorphism (i.e. it is independent of the chosen basepoint x0).
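A concrete instance of this construction is the round 2-sphere: G = SO(3) acts transitively on the sphere, the stabilizer of the north pole is the copy of SO(2) rotating about the z-axis, and dim S2 = dim SO(3) − dim SO(2) = 3 − 1 = 2. The NumPy sketch below is an illustrative numerical check of these two facts, with arbitrarily chosen angles.

```python
import numpy as np

def Rz(t):   # rotation about the z-axis: an element of the stabilizer H
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(t):   # rotation about the y-axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

north = np.array([0.0, 0.0, 1.0])      # the chosen basepoint x0

# H = SO(2) fixes the basepoint ...
assert np.allclose(Rz(0.7) @ north, north)

# ... and the action is transitive: any point with colatitude theta and
# longitude phi is reached from the north pole by g = Rz(phi) Ry(theta).
theta, phi = 1.1, 2.3
target = np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])
g = Rz(phi) @ Ry(theta)
assert np.allclose(g @ north, target)
print("stabilizer fixes the basepoint; the action reaches an arbitrary point")
```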
Bundle description
Given a Lie group G and closed subgroup H, there is a natural right action of H on G given by right multiplication. This action is both free and proper. The orbits are simply the left cosets of H in G. One concludes that G has the structure of a smooth principal H-bundle over the left coset space G/H:
H → G → G/H.
Types of Klein geometries
Effective geometries
The action of G on X = G/H need not be effective. The kernel of a Klein geometry is defined to be the kernel of the action of G on X. It is given by
K = {k ∈ G : g−1kg ∈ H for all g ∈ G}.
The kernel K may also be described as the core of H in G (i.e. the largest subgroup of H that is normal in G). It is the group generated by all the normal subgroups of G that lie in H.
A Klein geometry is said to be effective if K = 1 and locally effective if K is discrete. If (G, H) is a Klein geometry with kernel K, then (G/K, H/K) is an effective Klein geometry canonically associated to (G, H).
Geometrically oriented geometries
A Klein geometry is geometrically oriented if G is connected. (This does not imply that G/H is an oriented manifold.) If H is connected it follows that G is also connected (this is because G/H is assumed to be connected, and G → G/H is a fibration).
Given any Klein geometry (G, H), there is a geometrically oriented geometry canonically associated to it with the same base space G/H. This is the geometry (G0, G0 ∩ H), where G0 is the identity component of G. Note that G = G0 H.
Reductive geometries
A Klein geometry is said to be reductive, and G/H a reductive homogeneous space, if the Lie algebra of H has an H-invariant complement in the Lie algebra of G.
Examples
In the following table, there is a description of the classical geometries, modeled as Klein geometries.
References
Differential geometry
Lie groups
Homogeneous spaces | Klein geometry | [
"Physics",
"Mathematics"
] | 751 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Geometry",
"Symmetry"
] |
2,908,558 | https://en.wikipedia.org/wiki/Capacitor%20types | Capacitors are manufactured in many styles, forms, dimensions, and from a large variety of materials. They all contain at least two electrical conductors, called plates, separated by an insulating layer (dielectric). Capacitors are widely used as parts of electrical circuits in many common electrical devices.
Capacitors, together with resistors and inductors, belong to the group of passive components in electronic equipment. Small capacitors are used in electronic devices to couple signals between stages of amplifiers, as components of electric filters and tuned circuits, or as parts of power supply systems to smooth rectified current. Larger capacitors are used for energy storage in such applications as strobe lights, as parts of some types of electric motors, or for power factor correction in AC power distribution systems. Standard capacitors have a fixed value of capacitance, but adjustable capacitors are frequently used in tuned circuits. Different types are used depending on required capacitance, working voltage, current handling capacity, and other properties.
While, in absolute figures, the most commonly manufactured capacitors are integrated into dynamic random-access memory, flash memory, and other device chips, this article covers the discrete components.
General characteristics
Conventional construction
A conventional capacitor stores electric energy as static electricity by charge separation in an electric field between two electrode plates. The charge carriers are typically electrons. The amount of charge stored per unit voltage is essentially a function of the size of the plates, the plate material's properties, the properties of the dielectric material placed between the plates, and the separation distance (i.e. dielectric thickness). The potential between the plates is limited by the properties of the dielectric material and the separation distance.
Nearly all conventional industrial capacitors, except some special styles such as "feed-through capacitors", are constructed as "plate capacitors" even if their electrodes and the dielectric between them are wound or rolled. The capacitance, C, of a plate capacitor is:
C = ε · A / d.
The capacitance increases with the area A of the plates and with the permittivity ε of the dielectric material, and decreases with the plate separation distance d. The capacitance is therefore greatest in devices made from materials with a high permittivity, large plate area, and small distance between plates.
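As a rough numerical illustration of this formula (a minimal sketch; the plate area, film thickness and relative permittivity below are assumed values, not taken from this article):

EPSILON_0 = 8.854e-12  # vacuum permittivity in F/m

def plate_capacitance(eps_r, area_m2, distance_m):
    # C = epsilon_0 * epsilon_r * A / d, in farads
    return EPSILON_0 * eps_r * area_m2 / distance_m

# assumed example: 1 cm^2 electrode area, 10 um dielectric, eps_r = 2.2 (roughly PP film)
print(plate_capacitance(2.2, 1e-4, 10e-6))  # about 1.9e-10 F, i.e. roughly 0.2 nF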
Electrochemical construction
Another type – the electrochemical capacitor – makes use of two other storage principles to store electric energy. In contrast to ceramic, film, and electrolytic capacitors, supercapacitors (also known as electrical double-layer capacitors (EDLC) or ultracapacitors) do not have a conventional dielectric. The capacitance value of an electrochemical capacitor is determined by two high-capacity storage principles. These principles are:
electrostatic storage within Helmholtz double layers achieved on the phase interface between the surface of the electrodes and the electrolyte (double-layer capacitance); and
electrochemical storage achieved by a faradaic electron charge-transfer by specifically adsorbed ions with redox reactions (pseudocapacitance). Unlike batteries, in these reactions, the ions simply cling to the atomic structure of an electrode without making or breaking chemical bonds, and no or negligibly small chemical modifications are involved in charge/discharge.
The ratio of the storage resulting from each principle can vary greatly, depending on electrode design and electrolyte composition. Pseudocapacitance can increase the capacitance value by as much as an order of magnitude over that of the double-layer by itself.
Classification
Capacitors are divided into two mechanical groups: Fixed-capacitance devices with a constant capacitance and variable capacitors.
Variable capacitors are made as trimmers, which are typically adjusted only during circuit calibration, and as devices tunable during operation of the electronic instrument.
The most common group is the fixed capacitors. Many are named based on the type of dielectric. For a systematic classification these characteristics cannot be used, because one of the oldest, the electrolytic capacitor, is named instead by its cathode construction. So the most-used names are simply historical.
The most common kinds of capacitors are:
Ceramic capacitors have a ceramic dielectric.
Film and paper capacitors are named for their dielectrics.
Aluminum, tantalum and niobium electrolytic capacitors are named after the material used as the anode and the construction of the cathode (electrolyte).
Polymer capacitors are aluminum, tantalum or niobium electrolytic capacitors with conductive polymer as the electrolyte.
Supercapacitor is the family name for:
Double-layer capacitors were named for the physical phenomenon of the Helmholtz double-layer
Pseudocapacitors were named for their ability to store electric energy electro-chemically with reversible faradaic charge-transfer
Hybrid capacitors combine double-layer and pseudocapacitors to increase power density
Silver mica, glass, silicon, air-gap and vacuum capacitors are named for their dielectric.
In addition to the capacitor types shown above, which derived their names from historical development, there are many individual capacitors that have been named based on their application. They include:
Power capacitors, motor capacitors, DC-link capacitors, suppression capacitors, audio crossover capacitors, lighting ballast capacitors, snubber capacitors, coupling, decoupling or bypassing capacitors.
Often, more than one capacitor family is employed for these applications, e.g. interference suppression can use ceramic capacitors or film capacitors.
Other kinds of capacitors are discussed in the Special capacitors section below.
Dielectrics
The most common dielectrics are:
Ceramics
Plastic films
Oxide layer on metal (aluminum, tantalum, niobium)
Natural materials like mica, glass, paper, air, SF6, vacuum
All of them store their electrical charge statically within an electric field between two (parallel) electrodes.
Beyond these conventional capacitors, a family of electrochemical capacitors called supercapacitors was developed. Supercapacitors do not have a conventional dielectric. They store their electrical charge statically in Helmholtz double-layers and faradaically at the surface of electrodes:
with static double-layer capacitance in a double-layer capacitor and
with pseudocapacitance (faradaic charge transfer) in a pseudocapacitor
or with both storage principles together in hybrid capacitors.
The most important material parameters of the different dielectrics used and the approximate Helmholtz-layer thickness are given in the table below.
The capacitor's plate area can be adapted to the wanted capacitance value. The permittivity and the dielectric thickness are the determining parameters for capacitors. Ease of processing is also crucial. Thin, mechanically flexible sheets can be wrapped or stacked easily, yielding large designs with high capacitance values. Razor-thin metallized sintered ceramic layers covered with metallized electrodes, however, offer the best conditions for the miniaturization of circuits with SMD styles.
A brief look at the figures in the table above explains some simple facts:
Supercapacitors have the highest capacitance density because of their special charge storage principles
Electrolytic capacitors have lower capacitance density than supercapacitors but the highest capacitance density of conventional capacitors due to their thin dielectric.
Ceramic capacitors class 2 have much higher capacitance values in a given case than class 1 capacitors because of their much higher permittivity.
Film capacitors, with their different plastic film materials, show only a small spread in dimensions for a given capacitance/voltage value, because the minimum dielectric film thickness differs between the different film materials.
Capacitance and voltage range
Capacitance ranges from picofarads to more than hundreds of farads. Voltage ratings can reach 100 kilovolts. In general, capacitance and voltage correlate with physical size and cost.
Miniaturization
As in other areas of electronics, volumetric efficiency measures the performance of electronic function per unit volume. For capacitors, the volumetric efficiency is measured with the "CV product", calculated by multiplying the capacitance (C) by the maximum voltage rating (V), divided by the volume. From 1970 to 2005, volumetric efficiencies have improved dramatically.
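The CV product itself is easy to compute; the following minimal sketch (case dimensions and ratings are invented for illustration) expresses the volumetric efficiency of a cylindrical capacitor in farad-volts per cubic metre:

import math

def cv_per_volume(c_farad, v_rated, diameter_m, height_m):
    # volumetric efficiency = C * V divided by the case volume
    volume = math.pi * (diameter_m / 2) ** 2 * height_m
    return c_farad * v_rated / volume

# hypothetical 4700 uF / 10 V electrolytic in a 16 mm x 26 mm can
print(cv_per_volume(4700e-6, 10, 0.016, 0.026))  # roughly 9,000 F*V per m^3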
Overlapping range of the applications
These individual capacitors can perform their application regardless of which of the capacitor types shown above they belong to, so the ranges of application of the different capacitor types overlap.
Types and styles
Ceramic capacitors
A ceramic capacitor is a non-polarized fixed capacitor made out of two or more alternating layers of ceramic and metal in which the ceramic material acts as the dielectric and the metal acts as the electrodes. The ceramic material is a mixture of finely ground granules of paraelectric or ferroelectric materials, modified by mixed oxides that are necessary to achieve the capacitor's desired characteristics. The electrical behavior of the ceramic material is divided into two stability classes:
Class 1 ceramic capacitors with high stability and low losses, compensating the influence of temperature in resonant circuit applications. Common EIA/IEC code abbreviations are C0G/NP0, P2G/N150, R2G/N220, U2J/N750, etc.
Class 2 ceramic capacitors with high volumetric efficiency for buffer, by-pass and coupling applications. Common EIA/IEC code abbreviations are X7R/2X1, Z5U/2E6, Y5V/2F4, X7S/2C1, etc.
The great plasticity of ceramic raw material works well for many special applications and enables an enormous diversity of styles, shapes and great dimensional spread of ceramic capacitors. The smallest discrete capacitor, for instance, is a "01005" chip capacitor with the dimension of only 0.4 mm × 0.2 mm.
The construction of ceramic multilayer capacitors with mostly alternating layers results in single capacitors connected in parallel. This configuration increases capacitance and decreases all losses and parasitic inductances. Ceramic capacitors are well-suited for high frequencies and high current pulse loads.
Because the thickness of the ceramic dielectric layer can be easily controlled and produced to suit the desired application voltage, ceramic capacitors are available with rated voltages up to the 30 kV range.
Some ceramic capacitors of special shapes and styles are used as capacitors for special applications, including RFI/EMI suppression capacitors for connection to supply mains, also known as safety capacitors, X2Y and three-terminal capacitors for bypassing and decoupling applications, feed-through capacitors for noise suppression by low-pass filters and ceramic power capacitors for transmitters and HF applications.
Film capacitors
Film capacitors or plastic film capacitors are non-polarized capacitors with an insulating plastic film as the dielectric. The dielectric films are drawn to a thin layer, provided with metallic electrodes and wound into a cylindrical winding. The electrodes of film capacitors may be metallized aluminum or zinc, applied on one or both sides of the plastic film, resulting in metallized film capacitors or a separate metallic foil overlying the film, called film/foil capacitors.
Metallized film capacitors offer self-healing properties. Dielectric breakdowns or shorts between the electrodes do not destroy the component. The metallized construction makes it possible to produce wound capacitors with larger capacitance values (up to 100 μF and larger) in smaller cases than within film/foil construction.
Film/foil capacitors or metal foil capacitors use two plastic films as the dielectric. Each film is covered with a thin metal foil, mostly aluminium, to form the electrodes. The advantage of this construction is the ease of connecting the metal foil electrodes, along with an excellent current pulse strength.
A key advantage of every film capacitor's internal construction is direct contact to the electrodes on both ends of the winding. This contact keeps all current paths very short. The design behaves like a large number of individual capacitors connected in parallel, thus reducing the internal ohmic losses (equivalent series resistance or ESR) and equivalent series inductance (ESL). The inherent geometry of film capacitor structure results in low ohmic losses and a low parasitic inductance, which makes them suitable for applications with high surge currents (snubbers) and for AC power applications, or for applications at higher frequencies.
The plastic films used as the dielectric for film capacitors are polypropylene (PP), polyester (PET), polyphenylene sulfide (PPS), polyethylene naphthalate (PEN), and polytetrafluoroethylene (PTFE). Polypropylene, with a market share of about 50%, and polyester, with about 40%, are the most-used film materials. The remaining 10% is split among all the other materials, including PPS and paper with roughly 3% each.
Some film capacitors of special shapes and styles are used as capacitors for special applications, including RFI/EMI suppression capacitors for connection to the supply mains, also known as safety capacitors, snubber capacitors for very high surge currents, motor run capacitors and AC capacitors for motor-run applications.
Power film capacitors
A related type is the power film capacitor. The materials and construction techniques used for large power film capacitors mostly are similar to those of ordinary film capacitors. However, capacitors with high to very high power ratings for applications in power systems and electrical installations are often classified separately, for historical reasons. The standardization of ordinary film capacitors is oriented on electrical and mechanical parameters. The standardization of power capacitors by contrast emphasizes the safety of personnel and equipment, as given by the local regulating authority.
As modern electronic equipment gained the capacity to handle power levels that were previously the exclusive domain of "electrical power" components, the distinction between the "electronic" and "electrical" power ratings blurred. Historically, the boundary between these two families was approximately at a reactive power of 200 volt-amperes.
Film power capacitors mostly use polypropylene film as the dielectric. Other types include metallized paper capacitors (MP capacitors) and mixed dielectric film capacitors with polypropylene dielectrics. MP capacitors serve for low-cost applications and as field-free carrier electrodes (soggy foil capacitors) for high AC or high current pulse loads. Windings can be filled with an insulating oil or with epoxy resin to reduce air bubbles, thereby preventing short circuits.
They find use in converters to change voltage, current or frequency, to store or abruptly deliver electric energy, or to improve the power factor. The rated voltage range of these capacitors is from approximately 120 V AC (capacitive lighting ballasts) to 100 kV.
Electrolytic capacitors
Electrolytic capacitors have a metallic anode covered with an oxidized layer used as dielectric. The second electrode is a non-solid (wet) or solid electrolyte. Electrolytic capacitors are polarized. Three families are available, categorized according to their dielectric.
Aluminum electrolytic capacitors with aluminum oxide as dielectric
Tantalum electrolytic capacitors with tantalum pentoxide as dielectric
Niobium electrolytic capacitors with niobium pentoxide as dielectric.
The anode is highly roughened to increase the surface area. This and the relatively high permittivity of the oxide layer gives these capacitors very high capacitance per unit volume compared with film- or ceramic capacitors.
The permittivity of tantalum pentoxide is approximately three times higher than that of aluminium oxide, producing significantly smaller components. However, permittivity determines only the dimensions. Electrical parameters, especially conductivity, are established by the electrolyte's material and composition. Three general types of electrolytes are used:
non-solid (wet, liquid)—conductivity approximately 10 mS/cm; these are the lowest cost
solid manganese oxide—conductivity approximately 100 mS/cm offer high quality and stability
solid conductive polymer (Polypyrrole or PEDOT:PSS)—conductivity approximately 100...500 S/cm, offer ESR values as low as <10 mΩ
Internal losses of electrolytic capacitors, which are predominantly used for decoupling and buffering applications, are determined by the kind of electrolyte.
The large capacitance per unit volume of electrolytic capacitors makes them valuable in relatively high-current and low-frequency electrical circuits, e.g. in power supply filters for decoupling unwanted AC components from DC power connections or as coupling capacitors in audio amplifiers, for passing or bypassing low-frequency signals and storing large amounts of energy. The relatively high capacitance value of an electrolytic capacitor combined with the very low ESR of the polymer electrolyte of polymer capacitors, especially in SMD styles, makes them a competitor to MLC chip capacitors in personal computer power supplies.
Bipolar aluminum electrolytic capacitors (also called Non-Polarized capacitors) contain two anodized aluminium foils, behaving like two capacitors connected in series opposition.
Electrolytic capacitors for special applications include motor start capacitors, flashlight capacitors and audio frequency capacitors.
Supercapacitors
Supercapacitors (SC) comprise a family of electrochemical capacitors. Supercapacitor, sometimes called ultracapacitor, is a generic term for electric double-layer capacitors (EDLC), pseudocapacitors and hybrid capacitors. They don't have a conventional solid dielectric. The capacitance value of an electrochemical capacitor is determined by two storage principles, both of which contribute to the total capacitance of the capacitor:
Double-layer capacitance – Storage is achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductor and an electrolytic solution. The distance of separation of charge in a double-layer is on the order of a few Angstroms (0.3–0.8 nm). This storage is electrostatic in origin.
Pseudocapacitance – Storage is achieved by redox reactions, electrosorption or intercalation on the surface of the electrode, or by specifically adsorbed ions that result in a reversible faradaic charge-transfer. The pseudocapacitance is faradaic in origin.
The ratio of the storage resulting from each principle can vary greatly, depending on electrode design and electrolyte composition. Pseudocapacitance can increase the capacitance value by as much as an order of magnitude over that of the double-layer by itself.
Supercapacitors are divided into three families, based on the design of the electrodes:
Double-layer capacitors – with carbon electrodes or derivates with much higher static double-layer capacitance than the faradaic pseudocapacitance
Pseudocapacitors – with electrodes out of metal oxides or conducting polymers with a high amount of faradaic pseudocapacitance
Hybrid capacitors – capacitors with special and asymmetric electrodes that exhibit both significant double-layer capacitance and pseudocapacitance, such as lithium-ion capacitors
Supercapacitors bridge the gap between conventional capacitors and rechargeable batteries. They have the highest available capacitance values per unit volume and the greatest energy density of all capacitors. They support up to 12,000 farads/1.2 volt, with capacitance values up to 10,000 times that of electrolytic capacitors. While existing supercapacitors have energy densities that are approximately 10% of a conventional battery, their power density is generally 10 to 100 times greater. Power density is defined as the product of energy density multiplied by the speed at which the energy is delivered to the load. The greater power density results in much shorter charge/discharge cycles than a battery is capable of, and a greater tolerance for numerous charge/discharge cycles. This makes them well-suited for parallel connection with batteries, and may improve battery performance in terms of power density.
Within electrochemical capacitors, the electrolyte is the conductive connection between the two electrodes, distinguishing them from electrolytic capacitors, in which the electrolyte only forms the cathode, the second electrode.
Supercapacitors are polarized and must operate with correct polarity. Polarity is controlled by design with asymmetric electrodes, or, for symmetric electrodes, by a potential applied during the manufacturing process.
Supercapacitors support a broad spectrum of applications for power and energy requirements, including:
Low supply current over longer times, for memory backup in static RAMs (SRAMs) in electronic equipment
Power electronics that require very short, high-current pulses, as in the KERS system in Formula 1 cars
Recovery of braking energy for vehicles such as buses and trains
Supercapacitors are rarely interchangeable, especially those with higher energy densities. IEC standard 62391-1 Fixed electric double layer capacitors for use in electronic equipment identifies four application classes:
Class 1, Memory backup, discharge current in mA = 1 • C (F)
Class 2, Energy storage, discharge current in mA = 0.4 • C (F) • V (V)
Class 3, Power, discharge current in mA = 4 • C (F) • V (V)
Class 4, Instantaneous power, discharge current in mA = 40 • C (F) • V (V)
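A minimal sketch of these class definitions (the 10 F / 2.5 V part used below is hypothetical):

def discharge_current_mA(app_class, c_farad, v_volt):
    # IEC 62391-1 discharge-current test classes as quoted above
    if app_class == 1:   # memory backup
        return 1 * c_farad
    if app_class == 2:   # energy storage
        return 0.4 * c_farad * v_volt
    if app_class == 3:   # power
        return 4 * c_farad * v_volt
    if app_class == 4:   # instantaneous power
        return 40 * c_farad * v_volt
    raise ValueError("application class must be 1 to 4")

for k in (1, 2, 3, 4):
    print("class", k, discharge_current_mA(k, 10, 2.5), "mA")  # 10, 10, 100, 1000 mA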
Unusually for electronic components, supercapacitors are sold under a multitude of different trade or series names, such as APowerCap, BestCap, BoostCap, CAP-XX, DLCAP, EneCapTen, EVerCAP, DynaCap, Faradcap, GreenCap, Goldcap, HY-CAP, Kapton capacitor, Super capacitor, SuperCap, PAS Capacitor, PowerStor, PseudoCap and Ultracapacitor, making it difficult for users to classify these capacitors.
Class X and Class Y capacitors
Many safety regulations mandate that Class X or Class Y capacitors must be used whenever a "fail-to-short-circuit" could put humans in danger,
to guarantee galvanic isolation even when the capacitor fails.
Lightning strikes and other sources cause high voltage surges in mains power.
Safety capacitors protect humans and devices from high voltage surges by shunting the surge energy to ground.
In particular, safety regulations mandate a particular arrangement of Class X and Class Y mains filtering capacitors.
In principle, any dielectric could be used to build Class X and Class Y capacitors; perhaps by including an internal fuse to improve safety.
In practice, capacitors that meet Class X and Class Y specifications are typically
ceramic RFI/EMI suppression capacitors or
plastic film RFI/EMI suppression capacitors.
Miscellaneous capacitors
In addition to the capacitor types described above, which cover more or less the entire market of discrete capacitors, some new developments and very special capacitor types, as well as older types, can be found in electronics.
Integrated capacitors
Integrated capacitors—in integrated circuits, nano-scale capacitors can be formed by appropriate patterns of metallization on an isolating substrate. They may be packaged in multiple capacitor arrays with no other semiconductive parts as discrete components.
Glass capacitors—the first Leyden jar capacitor was made of glass; glass capacitors are in use in SMD versions for applications requiring ultra-reliable and ultra-stable service.
Power capacitors
Vacuum capacitors—used in high power RF transmitters
SF6 gas filled capacitors—used as capacitance standard in measuring bridge circuits
Special capacitors
Printed circuit boards—metal conductive areas in different layers of a multi-layer printed circuit board can act as a highly stable capacitor in Distributed-element filters. It is common industry practice to fill unused areas of one PCB layer with the ground conductor and another layer with the power conductor, forming a large distributed capacitor between the layers.
Wire—2 pieces of insulated wire twisted together. Capacitance values usually range from 3 pF to 15 pF. Used in homemade VHF circuits for oscillation feedback.
Specialized devices such as built-in capacitors with metal conductive areas in different layers of a multi-layer printed circuit board and kludges such as twisting together two pieces of insulated wire also exist.
Capacitors made by twisting 2 pieces of insulated wire together are called gimmick capacitors.
Gimmick capacitors were used in commercial and amateur radio receivers.
Obsolete capacitors
Leyden jars the earliest known capacitor
Clamped mica capacitors—the first capacitors with stable frequency behavior and low losses, used for military radio applications during World War II
Air-gap capacitors—used by the first spark-gap transmitters
Variable capacitors
Variable capacitors may have their capacitance changed by mechanical motion. There are two main types:
Tuning capacitor – variable capacitor for intentionally and repeatedly tuning an oscillator circuit in a radio or another tuned circuit
Trimmer capacitor – small variable capacitor usually for one-time oscillator circuit internal adjustment
Variable capacitors include capacitors that use a mechanical construction to change the distance between the plates, or the amount of plate surface area which overlaps. They mostly use air as dielectric medium.
Semiconductive variable-capacitance diodes are not capacitors in the sense of passive components, but they change their capacitance as a function of the applied reverse bias voltage and are used like variable capacitors. They have replaced tuning and trimmer capacitors in many applications.
Comparison of types
Electrical characteristics
Series-equivalent circuit
Discrete capacitors deviate from the ideal capacitor. An ideal capacitor only stores and releases electrical energy, with no dissipation. Capacitor components have losses and parasitic inductive parts. These imperfections in material and construction can have positive implications such as linear frequency and temperature behavior in class 1 ceramic capacitors. Conversely, negative implications include the non-linear, voltage-dependent capacitance in class 2 ceramic capacitors or the insufficient dielectric insulation of capacitors leading to leakage currents.
All properties can be defined and specified by a series equivalent circuit composed of an idealized capacitance and additional electrical components which model all losses and inductive parameters of a capacitor. In this series-equivalent circuit the electrical characteristics are defined by:
C, the capacitance of the capacitor
Rinsul, the insulation resistance of the dielectric, not to be confused with the insulation of the housing
Rleak, the resistance representing the leakage current of the capacitor
RESR, the equivalent series resistance which summarizes all ohmic losses of the capacitor, usually abbreviated as "ESR"
LESL, the equivalent series inductance which is the effective self-inductance of the capacitor, usually abbreviated as "ESL".
Using a series equivalent circuit instead of a parallel equivalent circuit is specified by IEC/EN 60384–1.
Standard capacitance values and tolerances
The rated capacitance CR or nominal capacitance CN is the value for which the capacitor has been designed. Actual capacitance depends on the measured frequency and ambient temperature. Standard measuring conditions are a low-voltage AC measuring method at a temperature of 20 °C with frequencies of
100 kHz, 1 MHz (preferred) or 10 MHz for non-electrolytic capacitors with CR ≤ 1 nF:
1 kHz or 10 kHz for non-electrolytic capacitors with 1 nF < CR ≤ 10 μF
100/120 Hz for electrolytic capacitors
50/60 Hz or 100/120 Hz for non-electrolytic capacitors with CR > 10 μF
For supercapacitors, a voltage drop method is applied for measuring the capacitance value.
Capacitors are available in geometrically increasing preferred values (E series standards) specified in IEC/EN 60063. According to the number of values per decade, these were called the E3, E6, E12, E24 etc. series. The range of units used to specify capacitor values has expanded to include everything from pico- (pF), nano- (nF) and microfarad (μF) to farad (F). Millifarad and kilofarad are uncommon.
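As an illustration of how the E-series values are used in practice, the following sketch snaps a desired capacitance to the nearest E12 preferred value (the target value is arbitrary):

import math

E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]  # IEC 60063 E12 mantissas

def nearest_e12(value_farad):
    # split into mantissa and decade, pick the closest preferred mantissa
    exponent = math.floor(math.log10(value_farad))
    mantissa = value_farad / 10 ** exponent
    best = min(E12 + [10.0], key=lambda m: abs(m - mantissa))
    return best * 10 ** exponent

print(nearest_e12(4.5e-6))  # 4.7e-06, i.e. the preferred value 4.7 uF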
The percentage of allowed deviation from the rated value is called tolerance. The actual capacitance value should be within its tolerance limits, or it is out of specification. IEC/EN 60062 specifies a letter code for each tolerance.
The required tolerance is determined by the particular application. The narrow tolerances of E24 to E96 are used for high-quality circuits such as precision oscillators and timers. General applications such as non-critical filtering or coupling circuits employ E12 or E6. Electrolytic capacitors, which are often used for filtering and bypassing, mostly have a tolerance range of ±20% and need to conform to E6 (or E3) series values.
Temperature dependence
Capacitance typically varies with temperature. The different dielectrics express great differences in temperature sensitivity. The temperature coefficient is expressed in parts per million (ppm) per degree Celsius for class 1 ceramic capacitors or in % over the total temperature range for all others.
Frequency dependence
Most discrete capacitor types show more or less pronounced capacitance changes with increasing frequency. The permittivity of class 2 ceramic and plastic film dielectrics diminishes with rising frequency, so their capacitance value decreases with increasing frequency. This phenomenon for ceramic class 2 and plastic film dielectrics is related to dielectric relaxation, in which the time constant of the electrical dipoles is the reason for the frequency dependence of permittivity.
For electrolytic capacitors with non-solid electrolyte, mechanical motion of the ions occurs. Their mobility is limited, so that at higher frequencies not all areas of the roughened anode structure are covered with charge-carrying ions. The more highly the anode structure is roughened, the more the capacitance value decreases with increasing frequency. Low voltage types with highly roughened anodes display, at 100 kHz, approximately 10 to 20% of the capacitance value measured at 100 Hz.
Voltage dependence
Capacitance may also change with applied voltage. This effect is more prevalent in class 2 ceramic capacitors. The permittivity of ferroelectric class 2 material depends on the applied voltage; higher applied voltage lowers the permittivity. The capacitance can drop to 80% of the value measured with the standardized measuring voltage of 0.5 or 1.0 V. This behavior is a small source of non-linearity in low-distortion filters and other analog applications. In audio applications this can cause distortion (measured using THD).
Film capacitors and electrolytic capacitors have no significant voltage dependence.
Rated and category voltage
The voltage at which the dielectric becomes conductive is called the breakdown voltage, and is given by the product of the dielectric strength and the separation between the electrodes. The dielectric strength depends on temperature, frequency, shape of the electrodes, etc. Because a breakdown in a capacitor normally is a short circuit and destroys the component, the operating voltage is lower than the breakdown voltage. The operating voltage is specified such that the voltage may be applied continuously throughout the life of the capacitor.
In IEC/EN 60384-1 the allowed operating voltage is called "rated voltage" or "nominal voltage". The rated voltage (UR) is the maximum DC voltage or peak pulse voltage that may be applied continuously at any temperature within the rated temperature range.
The voltage proof of nearly all capacitors decreases with increasing temperature. Some applications require a higher temperature range. Lowering the voltage applied at a higher temperature maintains safety margins. For some capacitor types therefore the IEC standard specify a second "temperature derated voltage" for a higher temperature range, the "category voltage". The category voltage (UC) is the maximum DC voltage or peak pulse voltage that may be applied continuously to a capacitor at any temperature within the category temperature range.
Impedance
In general, a capacitor is seen as a storage component for electric energy. But this is only one capacitor function. A capacitor can also act as an AC resistor. In many cases the capacitor is used as a decoupling capacitor to filter or bypass undesired biased AC frequencies to the ground. Other applications use capacitors for capacitive coupling of AC signals; the dielectric is used only for blocking DC. For such applications the AC resistance is as important as the capacitance value.
The frequency dependent AC resistance is called impedance and is the complex ratio of the voltage to the current in an AC circuit. Impedance extends the concept of resistance to AC circuits and possesses both magnitude and phase at a particular frequency. This is unlike resistance, which has only magnitude.
Writing Z = |Z|·e^(jθ), the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, j is the imaginary unit, and the argument θ gives the phase difference between voltage and current.
In capacitor data sheets, only the impedance magnitude |Z| is specified, simply written as "Z", so that the formula for the impedance can be written in Cartesian form
Z = R + jX,
where the real part of the impedance is the resistance R (for capacitors, R = ESR) and the imaginary part is the reactance X.
As shown in a capacitor's series-equivalent circuit, the real component includes an ideal capacitor C, an inductance ESL and a resistor ESR. The total reactance at the angular frequency ω is therefore given by the geometric (complex) addition of a capacitive reactance X_C = −1/(ωC) and an inductive reactance X_L = ω·ESL:
X = X_C + X_L.
To calculate the impedance, the resistance has to be added geometrically; the magnitude is then given by
|Z| = √(ESR² + (X_C + X_L)²).
The impedance is a measure of the capacitor's ability to pass alternating currents. In this sense the impedance can be used like Ohm's law,
I = U / |Z|,
to calculate either the peak or the effective value of the current or the voltage.
In the special case of resonance, in which both reactive resistances
X_C and X_L
have the same magnitude (|X_C| = X_L), the impedance is determined only by the ESR.
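A minimal numerical sketch of this series-equivalent impedance (the 100 nF capacitance and the parasitic ESR/ESL values are assumptions chosen only for illustration):

import math

def impedance_magnitude(f_hz, c_farad, esr_ohm, esl_henry):
    # |Z| = sqrt(ESR^2 + (X_C + X_L)^2), with X_C = -1/(omega*C) and X_L = omega*ESL
    omega = 2 * math.pi * f_hz
    x_c = -1.0 / (omega * c_farad)
    x_l = omega * esl_henry
    return math.sqrt(esr_ohm ** 2 + (x_c + x_l) ** 2)

def self_resonant_frequency(c_farad, esl_henry):
    # X_C and X_L cancel at f = 1 / (2*pi*sqrt(ESL*C)); only the ESR remains
    return 1.0 / (2 * math.pi * math.sqrt(esl_henry * c_farad))

C, ESR, ESL = 100e-9, 0.02, 1e-9   # assumed 100 nF part with 20 mOhm ESR and 1 nH ESL
print(self_resonant_frequency(C, ESL))          # about 15.9 MHz
for f in (1e3, 1e6, 16e6, 100e6):
    print(f, impedance_magnitude(f, C, ESR, ESL))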
Datasheets often show typical impedance curves for the different capacitance values. With increasing frequency, the impedance decreases down to a minimum. The lower the impedance, the more easily alternating currents can be passed through the capacitor. At the apex, the point of resonance, where XC has the same value as XL, the capacitor has its lowest impedance value. Here only the ESR determines the impedance. With frequencies above the resonance, the impedance increases again due to the ESL of the capacitor: the capacitor behaves as an inductance.
As such curves show, higher capacitance values fit lower frequencies better, while lower capacitance values fit higher frequencies better.
Aluminum electrolytic capacitors have relatively good decoupling properties in the lower frequency range up to about 1 MHz due to their large capacitance values. This is the reason for using electrolytic capacitors in standard or switched-mode power supplies behind the rectifier for smoothing application.
Ceramic and film capacitors, with their smaller capacitance values, are suitable for higher frequencies up to several hundred MHz. They also have significantly lower parasitic inductance, making them suitable for higher-frequency applications, due to their construction with end-surface contacting of the electrodes. To increase the range of frequencies, often an electrolytic capacitor is connected in parallel with a ceramic or film capacitor.
Many new developments are targeted at reducing parasitic inductance (ESL). This increases the resonance frequency of the capacitor and, for example, can follow the constantly increasing switching speed of digital circuits. Miniaturization, especially in the SMD multilayer ceramic chip capacitors (MLCC), increases the resonance frequency. Parasitic inductance is further lowered by placing the electrodes on the longitudinal side of the chip instead of the lateral side. The "face-down" construction associated with multi-anode technology in tantalum electrolytic capacitors further reduced ESL. Capacitor families such as the so-called MOS capacitor or silicon capacitors offer solutions when capacitors at frequencies up to the GHz range are needed.
Inductance (ESL) and self-resonant frequency
ESL in industrial capacitors is mainly caused by the leads and internal connections used to connect the capacitor plates to the outside world. Large capacitors tend to have higher ESL than small ones because the distances to the plates are longer and every millimetre of conductor adds inductance.
For any discrete capacitor, there is a frequency above DC at which it ceases to behave as a pure capacitor. This frequency, at which XC is as large as XL, is called the self-resonant frequency. The self-resonant frequency is the lowest frequency at which the impedance passes through a minimum. For any AC application, the self-resonant frequency is the highest frequency at which capacitors can be used as a capacitive component.
This is critically important for decoupling high-speed logic circuits from the power supply. The decoupling capacitor supplies transient current to the chip. Without decouplers, the IC demands current faster than the connection to the power supply can supply it, as parts of the circuit rapidly switch on and off. To counter this potential problem, circuits frequently use multiple bypass capacitors—small (100 nF or less) capacitors rated for high frequencies, a large electrolytic capacitor rated for lower frequencies and occasionally, an intermediate value capacitor.
Ohmic losses, ESR, dissipation factor, and quality factor
The summarized losses in discrete capacitors are ohmic AC losses. DC losses are specified as "leakage current" or "insulating resistance" and are negligible for an AC specification. AC losses are non-linear, possibly depending on frequency, temperature, age or humidity. The losses result from two physical conditions:
line losses including internal supply line resistances, the contact resistance of the electrode contact, line resistance of the electrodes, and in "wet" aluminum electrolytic capacitors and especially supercapacitors, the limited conductivity of liquid electrolytes and
dielectric losses from dielectric polarization.
The largest share of these losses in larger capacitors is usually the frequency-dependent ohmic dielectric losses. For smaller components, especially for wet electrolytic capacitors, the conductivity of liquid electrolytes may exceed dielectric losses. To measure these losses, the measurement frequency must be set. Since commercially available components offer capacitance values covering 15 orders of magnitude, ranging from pF (10⁻¹² F) to some 1000 F in supercapacitors, it is not possible to capture the entire range with only one frequency. IEC 60384-1 states that ohmic losses should be measured at the same frequency used to measure capacitance. These are:
100 kHz, 1 MHz (preferred) or 10 MHz for non-electrolytic capacitors with CR ≤ 1 nF:
1 kHz or 10 kHz for non-electrolytic capacitors with 1 nF < CR ≤ 10 μF
100/120 Hz for electrolytic capacitors
50/60 Hz or 100/120 Hz for non-electrolytic capacitors with CR > 10 μF
A capacitor's summarized resistive losses may be specified either as ESR, as a dissipation factor (DF, tan δ), or as a quality factor (Q), depending on application requirements.
Capacitors with higher ripple current loads, such as electrolytic capacitors, are specified with equivalent series resistance ESR. ESR can be shown as an ohmic part in the above vector diagram. ESR values are specified in datasheets per individual type.
The losses of film capacitors and some class 2 ceramic capacitors are mostly specified with the dissipation factor tan δ. These capacitors have smaller losses than electrolytic capacitors and are mostly used at higher frequencies up to some hundred MHz. However, the numeric value of the dissipation factor, measured at the same frequency, is independent of the capacitance value and can be specified for a capacitor series with a range of capacitance. The dissipation factor is determined as the ratio of the ESR to the reactance (XC − XL) and can be shown as the angle δ between the imaginary axis and the impedance vector.
If the inductance ESL is small, the dissipation factor can be approximated as:
tan δ ≈ ESR · ωC.
Capacitors with very low losses, such as ceramic Class 1 and Class 2 capacitors, specify resistive losses with a quality factor (Q). Ceramic Class 1 capacitors are especially suitable for LC resonant circuits with frequencies up to the GHz range, and for precise high-pass and low-pass filters. For an electrically resonant system, Q represents the effect of electrical resistance and characterizes a resonator's bandwidth B relative to its center or resonant frequency f0. Q is defined as the reciprocal value of the dissipation factor:
Q = 1/tan δ = f0/B.
For resonant circuits, a high Q value is a mark of the quality of the resonance.
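The following minimal sketch converts between ESR, dissipation factor and quality factor at a single measuring frequency, using the low-inductance approximation above (the 1 nF capacitance and 0.1 Ω ESR are assumed values):

import math

def tan_delta(esr_ohm, c_farad, f_hz):
    # tan(delta) ~ omega * C * ESR when ESL is negligible
    return 2 * math.pi * f_hz * c_farad * esr_ohm

def quality_factor(esr_ohm, c_farad, f_hz):
    # Q is the reciprocal of the dissipation factor
    return 1.0 / tan_delta(esr_ohm, c_farad, f_hz)

print(tan_delta(0.1, 1e-9, 1e6))       # about 6.3e-4
print(quality_factor(0.1, 1e-9, 1e6))  # about 1600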
Limiting current loads
A capacitor can act as an AC resistor, coupling AC voltage and AC current between two points. Every AC current flowing through a capacitor generates heat inside the capacitor body. This dissipated power loss P is caused by the ESR and is proportional to the square of the effective (RMS) current I:
P = I² · ESR.
The same power loss can be written with the dissipation factor as
P = U² · ω · C · tan δ,
where U is the RMS voltage across the capacitor.
The internally generated heat has to be dissipated to the ambient environment. The temperature of the capacitor, which is established by the balance between heat produced and heat dissipated, shall not exceed the capacitor's maximum specified temperature. Hence, the ESR or dissipation factor is a mark for the maximum power (AC load, ripple current, pulse load, etc.) a capacitor is specified for.
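A minimal sketch of this dissipation calculation (the ripple current and ESR values are assumptions):

import math

def loss_from_current(i_rms_a, esr_ohm):
    # P = I_rms^2 * ESR
    return i_rms_a ** 2 * esr_ohm

def loss_from_voltage(u_rms_v, f_hz, c_farad, tan_d):
    # equivalent form: P = U_rms^2 * omega * C * tan(delta)
    return u_rms_v ** 2 * 2 * math.pi * f_hz * c_farad * tan_d

# hypothetical electrolytic capacitor carrying 2 A of 100 Hz ripple with 50 mOhm ESR
print(loss_from_current(2.0, 0.05))  # 0.2 W heating the capacitor body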
AC currents may be a:
ripple current—an effective (RMS) AC current, coming from an AC voltage superimposed on a DC bias, a
pulse current—an AC peak current, coming from a voltage peak, or an
AC current—an effective (RMS) sinusoidal current
Ripple and AC currents mainly warm the capacitor body. The internally generated temperature caused by these currents influences the breakdown voltage of the dielectric: higher temperatures lower the voltage proof of all capacitors. In wet electrolytic capacitors, higher temperatures force the evaporation of electrolytes, shortening the lifetime of the capacitors. In film capacitors, higher temperatures may shrink the plastic film, changing the capacitor's properties.
Pulse currents, especially in metallized film capacitors, heat the contact areas between end spray (schoopage) and metallized electrodes. This may reduce the contact to the electrodes, increasing the dissipation factor.
For safe operation, the maximal temperature generated by any AC current flow through the capacitor is a limiting factor, which in turn limits AC load, ripple current, pulse load, etc.
Ripple current
A "ripple current" is the RMS value of a superimposed AC current of any frequency and any waveform of the current curve for continuous operation at a specified temperature. It arises mainly in power supplies (including switched-mode power supplies) after rectifying an AC voltage and flows as charge and discharge current through the decoupling or smoothing capacitor. The "rated ripple current" shall not exceed a temperature rise of 3, 5 or 10 °C, depending on the capacitor type, at the specified maximum ambient temperature.
Ripple current generates heat within the capacitor body due to the ESR of the capacitor. The components of capacitor ESR are: the dielectric losses caused by the changing field strength in the dielectric, the resistance of the supply conductor, and the resistance of the electrolyte. For an electric double-layer capacitor (EDLC) these resistance values can be derived from a Nyquist plot of the capacitor's complex impedance.
ESR is dependent on frequency and temperature. For ceramic and film capacitors, ESR in general decreases with increasing temperature but increases with higher frequencies due to increasing dielectric losses. For electrolytic capacitors, up to roughly 1 MHz, ESR decreases with increasing frequencies and temperatures.
The types of capacitors used for power applications have a specified rated value for maximum ripple current. These are primarily aluminum electrolytic capacitors, and tantalum as well as some film capacitors and Class 2 ceramic capacitors.
Aluminum electrolytic capacitors, the most common type for power supplies, experience shorter life expectancy at higher ripple currents. Exceeding the limit tends to result in explosive failure.
Tantalum electrolytic capacitors with solid manganese dioxide electrolyte are also limited by ripple current. Exceeding their ripple limits tends to cause shorts and burning components.
For film and ceramic capacitors, normally specified with a loss factor tan δ, the ripple current limit is determined by temperature rise in the body of approximately 10 °C. Exceeding this limit may destroy the internal structure and cause shorts.
Pulse current
The rated pulse load for a certain capacitor is limited by the rated voltage, the pulse repetition frequency, the temperature range and the pulse rise time. The "pulse rise time" dv/dt represents the steepest voltage gradient of the pulse (rise or fall time) and is expressed in volts per μs (V/μs).
The rated pulse rise time also indirectly specifies the maximum applicable peak current Ip. The peak current is defined as:
Ip = C · dv/dt,
where Ip is in A, C in μF, and dv/dt in V/μs.
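A minimal sketch of this peak-current relation (the film capacitor value and the pulse slope are assumed figures); since C in μF multiplied by dv/dt in V/μs gives amperes directly:

def peak_current_amp(c_uf, dv_dt_v_per_us):
    # Ip = C * dv/dt; uF * V/us = A
    return c_uf * dv_dt_v_per_us

print(peak_current_amp(0.1, 50))  # a 0.1 uF capacitor seeing a 50 V/us edge carries 5 A peak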
The permissible pulse current capacity of a metallized film capacitor generally allows an internal temperature rise of 8 to 10 K.
In the case of metallized film capacitors, pulse load depends on the properties of the dielectric material, the thickness of the metallization and the capacitor's construction, especially the construction of the contact areas between the end spray and metallized electrodes. High peak currents may lead to selective overheating of local contacts between end spray and metallized electrodes which may destroy some of the contacts, leading to increasing ESR.
For metallized film capacitors, so-called pulse tests simulate the pulse load that might occur during an application, according to a standard specification. IEC 60384 part 1, specifies that the test circuit is charged and discharged intermittently. The test voltage corresponds to the rated DC voltage and the test comprises 10000 pulses with a repetition frequency of 1 Hz. The pulse stress capacity is the pulse rise time. The rated pulse rise time is specified as 1/10 of the test pulse rise time.
The pulse load must be calculated for each application. A general rule for calculating the power handling of film capacitors is not available because of vendor-related internal construction details. To prevent the capacitor from overheating the following operating parameters have to be considered:
peak current per μF
Pulse rise or fall time dv/dt in V/μs
relative duration of charge and discharge periods (pulse shape)
maximum pulse voltage (peak voltage)
peak reverse voltage;
Repetition frequency of the pulse
Ambient temperature
Heat dissipation (cooling)
Higher pulse rise times are permitted for pulse voltage lower than the rated voltage.
Examples for calculations of individual pulse loads are given by many manufacturers, e.g. WIMA and Kemet.
AC current
An AC load can only be applied to a non-polarized capacitor. Capacitors for AC applications are primarily film capacitors, metallized paper capacitors, ceramic capacitors and bipolar electrolytic capacitors.
The rated AC load for an AC capacitor is the maximum sinusoidal effective AC current (rms) which may be applied continuously to a capacitor within the specified temperature range. In the datasheets the AC load may be expressed as
rated AC voltage at low frequencies,
rated reactive power at intermediate frequencies,
reduced AC voltage or rated AC current at high frequencies.
The rated AC voltage for film capacitors is generally calculated so that an internal temperature rise of 8 to 10 K is the allowed limit for safe operation. Because dielectric losses increase with increasing frequency, the specified AC voltage has to be derated at higher frequencies. Datasheets for film capacitors specify special curves for derating AC voltages at higher frequencies.
If film capacitors or ceramic capacitors only have a DC specification, the peak value of the AC voltage applied has to be lower than the specified DC voltage.
AC loads can occur in AC motor run capacitors, for voltage doubling, in snubbers, in lighting ballasts and for power factor correction (PFC) with phase shifting to improve transmission network stability and efficiency, which is one of the most important applications for large power capacitors. These mostly large PP film or metallized paper capacitors are limited by the rated reactive power VAr.
Bipolar electrolytic capacitors, to which an AC voltage may be applied, are specified with a rated ripple current.
Insulation resistance and self-discharge constant
The resistance of the dielectric is finite, leading to some level of DC "leakage current" that causes a charged capacitor to lose charge over time. For ceramic and film capacitors, this resistance is called "insulation resistance Rins". This resistance is represented by the resistor Rins in parallel with the capacitor in the series-equivalent circuit of capacitors.
Insulation resistance must not be confused with the outer isolation of the component with respect to the environment.
The time curve of self-discharge over the insulation resistance, with decreasing capacitor voltage, follows the formula
u(t) = U0 · e^(−t/τs),
with the stored DC voltage U0 and the self-discharge constant
τs = Rins · C.
Thus, after τs the voltage drops to 37% of the initial value.
The self-discharge constant is an important parameter for the insulation of the dielectric between the electrodes of ceramic and film capacitors. For example, a capacitor can be used as the time-determining component for time relays or for storing a voltage value, as in sample-and-hold circuits or operational amplifiers.
Class 1 ceramic capacitors have an insulation resistance of at least 10 GΩ, while class 2 capacitors have at least 4 GΩ or a self-discharge constant of at least 100 s. Plastic film capacitors typically have an insulation resistance of 6 to 12 GΩ. For capacitors in the μF range this corresponds to a self-discharge constant of about 2000–4000 s.
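A minimal sketch of this self-discharge behaviour (the 0.47 μF film capacitor and its 8 GΩ insulation resistance are assumed values within the ranges quoted above):

import math

def self_discharge_voltage(u0_v, r_ins_ohm, c_farad, t_seconds):
    # u(t) = U0 * exp(-t / tau_s) with tau_s = R_ins * C
    tau_s = r_ins_ohm * c_farad
    return u0_v * math.exp(-t_seconds / tau_s)

tau = 8e9 * 0.47e-6                                      # self-discharge constant, about 3760 s
print(self_discharge_voltage(10.0, 8e9, 0.47e-6, tau))   # about 3.7 V, i.e. 37% of 10 V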
The insulation resistance, and thus the self-discharge constant, can be reduced if humidity penetrates into the winding. Both are partially strongly temperature dependent and decrease with increasing temperature.
In electrolytic capacitors, the insulation resistance is defined as leakage current.
Leakage current
For electrolytic capacitors the insulation resistance of the dielectric is termed "leakage current". This DC current is represented by the resistor Rleak in parallel with the capacitor in the series-equivalent circuit of electrolytic capacitors. This resistance between the terminals of a capacitor is also finite. Rleak is lower for electrolytics than for ceramic or film capacitors.
The leakage current includes all weak imperfections of the dielectric caused by unwanted chemical processes and mechanical damage. It is also the DC current that can pass through the dielectric after applying a voltage. It depends on the interval without voltage applied (storage time), the thermic stress from soldering, on voltage applied, on temperature of the capacitor, and on measuring time.
The leakage current drops in the first minutes after applying DC voltage. In this period the dielectric oxide layer can self-repair weaknesses by building up new layers. The time required depends generally on the electrolyte. Solid electrolytes drop faster than non-solid electrolytes but remain at a slightly higher level.
The leakage current in non-solid electrolytic capacitors as well as in manganese oxide solid tantalum capacitors decreases with voltage-connected time due to self-healing effects. Although electrolytics leakage current is higher than current flow over insulation resistance in ceramic or film capacitors, the self-discharge of modern non solid electrolytic capacitors takes several weeks.
A particular problem with electrolytic capacitors is storage time. Higher leakage current can be the result of longer storage times. These behaviors are limited to electrolytes with a high percentage of water. Organic solvents such as GBL do not have high leakage with longer storage times.
Leakage current is normally measured 2 or 5 minutes after applying rated voltage.
Microphonics
All ferroelectric materials exhibit a piezoelectric effect. Because Class 2 ceramic capacitors use a ferroelectric ceramic dielectric, these types of capacitors may exhibit electrical effects called microphonics. Microphonics (microphony) describes how electronic components transform mechanical vibrations into an undesired electrical signal (noise). The dielectric may absorb mechanical forces from shock or vibration by changing thickness and changing the electrode separation, affecting the capacitance, which in turn induces an AC current. The resulting interference is especially problematic in audio applications, potentially causing feedback or unintended recording.
In the reverse microphonic effect, varying the electric field between the capacitor plates exerts a physical force, turning them into an audio speaker. High current impulse loads or high ripple currents can generate audible sound from the capacitor itself, draining energy and stressing the dielectric.
Dielectric absorption (soakage)
Dielectric absorption occurs when a capacitor that has remained charged for a long time discharges only incompletely when briefly discharged. Although an ideal capacitor would reach zero volts after discharge, real capacitors develop a small voltage from time-delayed dipole discharging, a phenomenon that is also called dielectric relaxation, "soakage" or "battery action".
In many applications of capacitors dielectric absorption is not a problem but in some applications, such as long-time-constant integrators, sample-and-hold circuits, switched-capacitor analog-to-digital converters, and very low-distortion filters, the capacitor must not recover a residual charge after full discharge, so capacitors with low absorption are specified.
The voltage at the terminals generated by the dielectric absorption may in some cases cause problems in the function of an electronic circuit or can be a safety risk to personnel. In order to prevent shocks, most very large capacitors are shipped with shorting wires that need to be removed before they are used.
Energy density
The capacitance value depends on the dielectric material (ε), the surface of the electrodes (A) and the distance (d) separating the electrodes, and is given by the formula of a plate capacitor:
C = ε · A / d.
The separation of the electrodes and the voltage proof of the dielectric material defines the breakdown voltage of the capacitor. The breakdown voltage is proportional to the thickness of the dielectric.
Consider, theoretically, two capacitors with the same mechanical dimensions and dielectric, but one of them with half the dielectric thickness. With the same outer dimensions, twice the parallel-plate area can be placed inside it. This capacitor theoretically has 4 times the capacitance of the first capacitor but half of the voltage proof.
Since the energy stored in a capacitor is given by:
E = ½ · C · V²,
a capacitor having a dielectric half as thick as another has 4 times the capacitance but half the voltage proof, yielding an equal maximum energy density.
Therefore, dielectric thickness does not affect energy density within a capacitor of fixed overall dimensions. Using a few thick layers of dielectric can support a high voltage, but low capacitance, while thin layers of dielectric produce a low breakdown voltage, but a higher capacitance.
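A minimal numerical sketch of this trade-off (the reference capacitance and voltage are arbitrary): halving the dielectric thickness in the same case is idealized here as quadrupling the capacitance and halving the withstand voltage, leaving the stored energy unchanged.

def stored_energy_joule(c_farad, v_volt):
    # E = 1/2 * C * V^2
    return 0.5 * c_farad * v_volt ** 2

C1, V1 = 1e-6, 400.0          # hypothetical reference capacitor
C2, V2 = 4 * C1, V1 / 2       # same case size, half the dielectric thickness
print(stored_energy_joule(C1, V1), stored_energy_joule(C2, V2))  # both 0.08 J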
This assumes that neither the electrode surfaces nor the permittivity of the dielectric change with the voltage proof. A simple comparison with two existing capacitor series can show whether reality matches theory. The comparison is easy, because the manufacturers use standardized case sizes or boxes for different capacitance/voltage values within a series.
In reality, modern capacitor series do not fit the theory. For electrolytic capacitors, the sponge-like rough surface of the anode foil gets smoother with higher voltages, decreasing the surface area of the anode. But because the energy increases with the square of the voltage, and the surface of the anode decreases less than the voltage proof increases, the energy density increases clearly. For film capacitors, the permittivity changes with dielectric thickness and other mechanical parameters, so the deviation from the theory has other reasons.
It is also instructive to compare the capacitors from the table with a supercapacitor, the capacitor family with the highest energy density. For this, the 25 F/2.3 V capacitor in dimensions D × H = 16 mm × 26 mm from the Maxwell HC Series is compared with an electrolytic capacitor of approximately equal size in the table. This supercapacitor has roughly 5000 times higher capacitance than the 4700 μF/10 V electrolytic capacitor, but roughly a quarter of the voltage, and holds about 66,000 mWs (0.018 Wh) of stored electrical energy, approximately 100 times higher energy density (40 to 280 times) than the electrolytic capacitor.
Long time behavior, aging
Electrical parameters of capacitors may change over time during storage and application. The reasons for these parameter changes vary: they may be a property of the dielectric, environmental influences, chemical processes or, for non-solid materials, drying-out effects.
Aging
In ferroelectric Class 2 ceramic capacitors, capacitance decreases over time. This behavior is called "aging". Aging occurs in ferroelectric dielectrics, where domains of polarization in the dielectric contribute to the total polarization. Degradation of polarized domains in the dielectric decreases permittivity and therefore capacitance over time. The aging follows a logarithmic law, which defines the decrease of capacitance as a constant percentage per time decade after the soldering recovery time at a defined temperature, for example in the period from 1 to 10 hours at 20 °C. Because the law is logarithmic, the percentage loss of capacitance doubles between 1 h and 100 h and triples between 1 h and 1,000 h, and so on. Aging is fastest near the beginning, and the absolute capacitance value stabilizes over time.
The rate of aging of Class 2 ceramic capacitors depends mainly on the materials used. Generally, the higher the temperature dependence of the ceramic, the higher the aging percentage. The typical aging of X7R ceramic capacitors is about 2.5% per decade. The aging rate of Z5U ceramic capacitors is significantly higher and can be up to 7% per decade.
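A minimal sketch of the logarithmic ageing law described above. The 1 h reference time and the 100 nF starting value are illustrative assumptions, not data for a specific part.

    import math

    def aged_capacitance(c0, hours, aging_pct_per_decade, t_ref_hours=1.0):
        """Logarithmic ageing law for Class 2 ceramics:
        C(t) = C0 * (1 - k/100 * log10(t / t_ref)), valid for t >= t_ref."""
        return c0 * (1.0 - aging_pct_per_decade / 100.0 * math.log10(hours / t_ref_hours))

    c0 = 100e-9                      # assumed 100 nF X7R part, referenced 1 h after the last de-aging
    for t in (1, 10, 100, 1000, 10000):
        print(f"{t:>6} h : {aged_capacitance(c0, t, 2.5)*1e9:.1f} nF")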
The aging process of Class 2 ceramic capacitors may be reversed by heating the component above the Curie point.
Class 1 ceramic capacitors and film capacitors do not have ferroelectric-related aging. Environmental influences such as higher temperature, high humidity and mechanical stress can, over a longer period, lead to a small irreversible change in the capacitance value sometimes called aging, too.
The change of capacitance for P 100 and N 470 Class 1 ceramic capacitors is lower than 1%; for capacitors with N 750 to N 1500 ceramics it is ≤ 2%. Film capacitors may lose capacitance due to self-healing processes or gain it due to humidity influences. Typical changes over 2 years at 40 °C are, for example, ±3% for PE film capacitors and ±1% for PP film capacitors.
Life time
Electrolytic capacitors with non-solid electrolyte age as the electrolyte evaporates. This evaporation depends on temperature and on the current load the capacitors experience. Electrolyte loss influences capacitance and ESR: capacitance decreases and the ESR increases over time. In contrast to ceramic, film and electrolytic capacitors with solid electrolytes, "wet" electrolytic capacitors reach a specified "end of life" when a defined maximum change of capacitance or ESR is exceeded. End of life, "load life" or "lifetime" can be estimated either by formula, by diagrams or roughly by a so-called "10-degree law". A typical specification for an electrolytic capacitor states a lifetime of 2,000 hours at 85 °C, doubling for every 10 degrees lower temperature, which gives a lifespan of approximately 15 years at room temperature.
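The "10-degree law" quoted above can be applied directly. The sketch below reproduces the 2,000 h at 85 °C datasheet example and the roughly 15-year figure at room temperature; it is a rule-of-thumb estimate, not a manufacturer's lifetime model.

    def estimated_lifetime_hours(rated_hours, rated_temp_c, operating_temp_c):
        """'10-degree law': lifetime doubles for every 10 degC below the rated temperature."""
        return rated_hours * 2.0 ** ((rated_temp_c - operating_temp_c) / 10.0)

    hours = estimated_lifetime_hours(2000, 85, 25)       # datasheet example from the text
    print(hours, hours / (24 * 365), "years")            # ~128,000 h, roughly 15 years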
Supercapacitors also experience electrolyte evaporation over time, and their lifetime is estimated in a similar way to wet electrolytic capacitors. In addition to temperature, the voltage and current load influence the lifetime; operating below the rated voltage, at lower current loads and at lower temperature extends the lifetime.
Failure rate
Capacitors are reliable components with low failure rates, achieving life expectancies of decades under normal conditions. Most capacitors pass a test at the end of production similar to a "burn in", so that early failures are found during production, reducing the number of post-shipment failures.
Reliability for capacitors is usually specified as a number of Failures In Time (FIT) during the period of constant random failures. FIT is the number of failures that can be expected in one billion (10^9) component-hours of operation at fixed working conditions (e.g. 1,000 devices for 1 million hours, or 1 million devices for 1,000 hours each, at 40 °C and 0.5 UR). For other conditions of applied voltage, current load, temperature, mechanical influences and humidity, the FIT can be recalculated with terms standardized for industrial or military contexts.
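A short sketch of how a FIT figure translates into expected failures for a given population and operating time. The 2-FIT value, board count and operating period are hypothetical, chosen only for illustration.

    def expected_failures(fit, n_devices, hours):
        """FIT = failures per 1e9 component-hours; expected failures = FIT * device-hours / 1e9."""
        return fit * n_devices * hours / 1e9

    # hypothetical 2-FIT capacitor type, 1000 boards with 100 capacitors each, 10 years of operation
    print(expected_failures(2, 1000 * 100, 10 * 8760))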
Additional information
Soldering
Capacitors may experience changes to electrical parameters due to environmental influences like soldering, mechanical stress factors (vibration, shock) and humidity. The greatest stress factor is soldering. The heat of the solder bath, especially for SMD capacitors, can cause ceramic capacitors to change contact resistance between terminals and electrodes; in film capacitors, the film may shrink, and in wet electrolytic capacitors the electrolyte may boil. A recovery period enables characteristics to stabilize after soldering; some types may require up to 24 hours. Some properties may change irreversibly by a few per cent from soldering.
Electrolytic behavior from storage or disuse
Electrolytic capacitors with non-solid electrolyte are "aged" during manufacturing by applying rated voltage at high temperature for a sufficient time to repair all cracks and weaknesses that may have occurred during production. Some electrolytes with a high water content react quite aggressively or even violently with unprotected aluminum. This led to a "storage" or "disuse" problem with electrolytic capacitors manufactured before the 1980s: chemical processes weaken the oxide layer when these capacitors remain unused for a long period, leading to failure or poor performance such as excessive leakage. New electrolytes with "inhibitors" or "passivators" were developed during the 1980s to lessen this problem.
"Pre-conditioning" may be recommended for electrolytic capacitors with non-solid electrolyte, even those manufactured recently, that have not been in use for an extended period. In pre-conditioning a voltage is applied across the capacitor and a deliberately limited current is passed through the capacitor. Sending a limited current through the capacitor repairs oxide layers damaged during the period of disuse. The applied voltage is lower than or equal to the capacitor's rated voltage. Current may be limited using, for instance, a series resistor. Pre-conditioning is stopped once leakage current is below some acceptable level at the desired voltage. As of 2015 one manufacturer indicates that pre-conditioning may be usefully carried out for capacitors with non-solid electrolytes that have been in storage for more than 1 to 10 years, the maximum storage time depending on capacitor type.
IEC/EN standards
The tests and requirements to be met by capacitors for use in electronic equipment for approval as standardized types are set out in the generic specification IEC/EN 60384-1 and in the sectional specifications listed below.
Generic specification
IEC/EN 60384-1 - Fixed capacitors for use in electronic equipment
Ceramic capacitors
IEC/EN 60384-8—Fixed capacitors of ceramic dielectric, Class 1
IEC/EN 60384-9—Fixed capacitors of ceramic dielectric, Class 2
IEC/EN 60384-21—Fixed surface mount multilayer capacitors of ceramic dielectric, Class 1
IEC/EN 60384-22—Fixed surface mount multilayer capacitors of ceramic dielectric, Class 2
Film capacitors
IEC/EN 60384-2—Fixed metallized polyethylene-terephthalate film dielectric d.c. capacitors
IEC/EN 60384-11—Fixed polyethylene-terephthalate film dielectric metal foil d.c. capacitors
IEC/EN 60384-13—Fixed polypropylene film dielectric metal foil d.c. capacitors
IEC/EN 60384-16—Fixed metallized polypropylene film dielectric d.c. capacitors
IEC/EN 60384-17—Fixed metallized polypropylene film dielectric a.c. and pulse capacitors
IEC/EN 60384-19—Fixed metallized polyethylene-terephthalate film dielectric surface mount d.c. capacitors
IEC/EN 60384-20—Fixed metallized polyphenylene sulfide film dielectric surface mount d.c. capacitors
IEC/EN 60384-23—Fixed metallized polyethylene naphthalate film dielectric chip d.c. capacitors
Electrolytic capacitors
IEC/EN 60384-3—Surface mount fixed tantalum electrolytic capacitors with manganese dioxide solid electrolyte
IEC/EN 60384-4—Aluminium electrolytic capacitors with solid (MnO2) and non-solid electrolyte
IEC/EN 60384-15—Fixed tantalum capacitors with non-solid and solid electrolyte
IEC/EN 60384-18—Fixed aluminium electrolytic surface mount capacitors with solid (MnO2) and non-solid electrolyte
IEC/EN 60384-24—Surface mount fixed tantalum electrolytic capacitors with conductive polymer solid electrolyte
IEC/EN 60384-25—Surface mount fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
IEC/EN 60384-26—Fixed aluminium electrolytic capacitors with conductive polymer solid electrolyte
Supercapacitors
IEC/EN 62391-1—Fixed electric double-layer capacitors for use in electric and electronic equipment - Part 1: Generic specification
IEC/EN 62391-2—Fixed electric double-layer capacitors for use in electronic equipment - Part 2: Sectional specification - Electric double-layer capacitors for power application
Capacitor symbols
Standard circuit-diagram symbols exist for the common capacitor types: capacitor, polarized (electrolytic) capacitor, bipolar electrolytic capacitor, feedthrough capacitor, trimmer capacitor and variable capacitor.
Markings
Imprinted
Capacitors, like most other electronic components, carry imprinted markings if enough space is available, indicating the manufacturer, type, electrical and thermal characteristics, and date of manufacture. If they are large enough, the capacitor is marked with:
manufacturer's name or trademark;
manufacturer's type designation;
polarity of the terminations (for polarized capacitors);
rated capacitance;
tolerance on rated capacitance;
rated voltage and nature of supply (AC or DC);
climatic category or rated temperature;
year and month (or week) of manufacture;
certification marks of safety standards (for safety EMI/RFI suppression capacitors).
Polarized capacitors have polarity markings, usually a "−" (minus) sign on the side of the negative electrode for electrolytic capacitors, or a stripe or a "+" (plus) sign; see Polarity marking below. Also, the negative lead of leaded "wet" e-caps is usually shorter.
Smaller capacitors use a shorthand notation. The most commonly used format is: XYZ J/K/M VOLTS V, where XYZ represents the capacitance (calculated as XY × 10^Z pF), the letters J, K or M indicate the tolerance (±5%, ±10% and ±20% respectively) and VOLTS V represents the working voltage.
Examples:
105K 330 V implies a capacitance of 10 × 10^5 pF = 1 μF (K = ±10%) with a working voltage of 330 V.
473M 100 V implies a capacitance of 47 × 10^3 pF = 47 nF (M = ±20%) with a working voltage of 100 V.
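A minimal decoder for the shorthand format described above. It implements only the XYZ-plus-tolerance-letter convention given in the text and is not a universal capacitor-marking parser.

    def parse_marking(code):
        """Parse the 'XYZ T' shorthand: XYZ -> XY * 10**Z picofarads, T -> tolerance letter."""
        tolerances = {"J": "±5%", "K": "±10%", "M": "±20%"}
        digits, tol = code[:3], code[3]
        picofarads = int(digits[:2]) * 10 ** int(digits[2])
        return picofarads, tolerances.get(tol, "unknown")

    for code in ("105K", "473M"):
        pf, tol = parse_marking(code)
        print(code, "->", pf, "pF,", tol)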
Capacitance, tolerance and date of manufacture can be indicated with a short code specified in IEC/EN 60062. Examples of short-marking of the rated capacitance (microfarads): μ47 = 0.47 μF, 4μ7 = 4.7 μF, 47μ = 47 μF
The date of manufacture is often printed in accordance with international standards.
Version 1: coding with year/week numeral code, "1208" is "2012, week number 8".
Version 2: coding with year code/month code. The year codes are: "R" = 2003, "S"= 2004, "T" = 2005, "U" = 2006, "V" = 2007, "W" = 2008, "X" = 2009, "A" = 2010, "B" = 2011, "C" = 2012, "D" = 2013, etc. Month codes are: "1" to "9" = Jan. to Sept., "O" = October, "N" = November, "D" = December. "X5" is then "2009, May"
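A sketch decoding the version-2 date code described above. The year table is copied from the list in the text; since manufacturers reuse the letters over the years, real decoding also needs context about the production period.

    def decode_date_code(code):
        """Decode a two-character year/month code such as 'X5' -> (2009, 5)."""
        years = {"R": 2003, "S": 2004, "T": 2005, "U": 2006, "V": 2007, "W": 2008,
                 "X": 2009, "A": 2010, "B": 2011, "C": 2012, "D": 2013}
        months = {**{str(i): i for i in range(1, 10)}, "O": 10, "N": 11, "D": 12}
        return years[code[0]], months[code[1]]

    print(decode_date_code("X5"))   # (2009, 5), i.e. May 2009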
For very small capacitors like MLCC chips no marking is possible. Here only the traceability of the manufacturers can ensure the identification of a type.
Colour coding
Modern capacitors do not use colour coding.
Polarity marking
Aluminum e-caps with non-solid electrolyte have a polarity marking at the cathode (minus) side. Aluminum, tantalum, and niobium e-caps with solid electrolyte have a polarity marking at the anode (plus) side. Supercapacitors are marked at the minus side.
Market segments
Discrete capacitors today are industrial products produced in very large quantities for use in electronic and electrical equipment. Globally, the market for fixed capacitors was estimated at US$18 billion in 2008 for 1,400 billion (1.4 × 10^12) pieces. This market is dominated by ceramic capacitors, with an estimated one trillion (1 × 10^12) items per year.
Detailed estimated figures in value for the main capacitor families are:
Ceramic capacitors—US$8.3 billion (46%);
Aluminum electrolytic capacitors—US$3.9 billion (22%);
Film capacitors and paper capacitors—US$2.6 billion (15%);
Tantalum electrolytic capacitors—US$2.2 billion (12%);
Super capacitors (Double-layer capacitors)—US$0.3 billion (2%); and
Others like silver mica and vacuum capacitors—US$0.7 billion (3%).
All other capacitor types are negligible in terms of value and quantity compared with the above types.
See also
Circuit design
Decoupling capacitor
List of capacitor manufacturers
References
External links
Spark Museum (von Kleist and Musschenbroek)
Modeling Dielectric Absorption in Capacitors
A different view of all this capacitor stuff
Images of different types of capacitors
Overview of different capacitor types
Capsite 2015 Introduction to capacitors
da:Elektrisk kondensator
et:Elektrikondensaator
he:קבל
ja:コンデンサ
ru:Электрический конденсатор
ta:மின் தேக்கி | Capacitor types | [
"Physics"
] | 15,768 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
2,908,847 | https://en.wikipedia.org/wiki/Chemical%20clock | A chemical clock (or clock reaction) is a complex mixture of reacting chemical compounds in which the onset of an observable property (discoloration or coloration) occurs after a predictable induction time due to the presence of clock species at a detectable amount.
In cases where one of the reagents has a visible color, crossing a concentration threshold can lead to an abrupt color change after a reproducible time lapse.
Types
Clock reactions may be classified into three or four types:
Substrate-depletive clock reaction
The simplest clock reaction featuring two reactions:
A → C (rate k1)
B + C → products (rate k2, fast)
When substrate (B) is present, the clock species (C) is quickly consumed in the second reaction. Only when substrate B is completely depleted can species C accumulate, causing the color to change. An example of this clock reaction is the sulfite/iodate reaction or iodine clock reaction, also known as Landolt's reaction.
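The induction-time behaviour of this scheme can be illustrated with a simple explicit-Euler integration. The rate constants, initial concentrations and "visibility" threshold below are arbitrary illustrative values, not data for the Landolt reaction.

    def simulate(k1=0.05, k2=500.0, a0=1.0, b0=0.2, dt=1e-3, t_end=20.0, visible=0.01):
        """A -> C (slow, k1); B + C -> products (fast, k2). Returns the time at which
        the clock species C first exceeds the 'visible' threshold."""
        a, b, c, t = a0, b0, 0.0, 0.0
        onset = None
        while t < t_end:
            r1 = k1 * a          # production of the clock species
            r2 = k2 * b * c      # fast scavenging by the substrate B
            a -= r1 * dt
            c += (r1 - r2) * dt
            b = max(b - r2 * dt, 0.0)
            t += dt
            if onset is None and c > visible:
                onset = t
        return onset

    print(f"induction time ~ {simulate():.1f} (arbitrary time units)")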
Sometimes, a clock reaction involves the production of intermediate species in three consecutive reactions.
P + Q → R
R + Q → C
P + C → 2R
Given that Q is in excess, when the substrate (P) is depleted, C builds up, resulting in the color change.
Autocatalysis-driven clock reaction
The basis of the reaction is similar to the substrate-depletive clock reaction, except that the rate k2 is very slow, leading to the coexistence of substrate and clock species, so the substrate need not be depleted before the change in color is observed. An example of this clock is the pentathionate/iodate reaction.
Pseudoclock behavior
The reactions in this category behave like a clock reaction; however, they are irreproducible, unpredictable and hard to control. Examples are the chlorite/thiosulfate and iodide/chlorite reactions.
Crazy clock reaction
The reaction is irreproducible from run to run due to the initial inhomogeneity of the mixture, which results from variations in stirring rate, overall volume and reactor geometry. Repeating the reaction in a statistically meaningful manner yields a reproducible cumulative probability distribution curve. An example of this clock is the iodate/arsenous acid reaction.
One reaction may fall into more than one of the classifications above depending on the circumstances. For example, the iodate−arsenous acid reaction can behave as a substrate-depletive clock reaction, an autocatalysis-driven clock reaction or a crazy clock reaction.
Examples
One class of example is the iodine clock reactions, in which an iodine species is mixed with redox reagents in the presence of starch. After a delay, a dark blue color suddenly appears due to the formation of a triiodide-starch complex.
Additional reagents can be added to some chemical clocks to build a chemical oscillator. For example, the Briggs–Rauscher reaction is derived from an iodine clock reaction by adding perchloric acid, malonic acid and manganese sulfate.
See also
Circadian clock
Chemical oscillator
References
Chemical kinetics
Clocks
Non-equilibrium thermodynamics
Oscillation
Articles containing video clips | Chemical clock | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 668 | [
"Machines",
"Chemical reaction engineering",
"Clock reactions",
"Non-equilibrium thermodynamics",
"Clocks",
"Measuring instruments",
"Physical systems",
"Mechanics",
"Oscillation",
"Chemical kinetics",
"Dynamical systems"
] |
2,909,308 | https://en.wikipedia.org/wiki/Pressure%20gradient | In hydrodynamics and hydrostatics, the pressure gradient (typically of air but more generally of any fluid) is a physical quantity that describes in which direction and at what rate the pressure increases the most rapidly around a particular location. The pressure gradient is a dimensional quantity expressed in units of pascals per metre (Pa/m). Mathematically, it is the gradient of pressure as a function of position. The gradient of pressure in hydrostatics is equal to the body force density (generalised Stevin's Law).
In petroleum geology and the petrochemical sciences pertaining to oil wells, and more specifically within hydrostatics, pressure gradients refer to the gradient of vertical pressure in a column of fluid within a wellbore and are generally expressed in pounds per square inch per foot (psi/ft). This column of fluid is subject to the compound pressure gradient of the overlying fluids. The path and geometry of the column are irrelevant; only the vertical depth of the column is relevant to the vertical pressure of any point within the column and to the pressure gradient at any given true vertical depth.
Physical interpretation
The concept of a pressure gradient is a local characterisation of the air (more generally, of the fluid under investigation). The pressure gradient is defined only at those spatial scales at which pressure (more generally, fluid dynamics) itself is defined.
Within planetary atmospheres (including the Earth's), the pressure gradient is a vector pointing roughly downwards, because the pressure changes most rapidly vertically, increasing downwards (see vertical pressure variation). The value of the strength (or norm) of the pressure gradient in the troposphere is typically of the order of 9 Pa/m (or 90 hPa/km).
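A minimal estimate of this vertical gradient near the surface, treating air as an ideal gas in hydrostatic balance; the temperature and pressure values are assumptions for a standard-atmosphere-like case.

    # |dp/dz| = rho * g, with rho from the ideal gas law
    R_SPECIFIC = 287.05      # J/(kg K), dry air
    g = 9.81                 # m/s^2

    def vertical_pressure_gradient(p_pa=101325.0, t_kelvin=288.0):
        rho = p_pa / (R_SPECIFIC * t_kelvin)
        return rho * g       # Pa per metre

    # ~12 Pa/m at sea level; the value decreases aloft, consistent with the ~9 Pa/m
    # tropospheric figure quoted above
    print(vertical_pressure_gradient())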
The pressure gradient often has a small but critical horizontal component, which is largely responsible for wind circulation in the atmosphere. The horizontal pressure gradient is a two-dimensional vector resulting from the projection of the pressure gradient onto a local horizontal plane. Near the Earth's surface, this horizontal pressure gradient force is directed from higher toward lower pressure. Its particular orientation at any one time and place depends strongly on the weather situation. At mid-latitudes, the typical horizontal pressure gradient may take on values of the order of 10−2 Pa/m (or 10 Pa/km), although rather higher values occur within meteorological fronts.
Weather and climate relevance
Interpreting differences in air pressure between different locations is a fundamental component of many meteorological and climatological disciplines, including weather forecasting. As indicated above, the pressure gradient constitutes one of the main forces acting on the air to make it move as wind. Note that the pressure gradient force points from high towards low pressure zones. It is thus oriented in the opposite direction from the pressure gradient itself.
In acoustics
In acoustics, the pressure gradient is proportional to the sound particle acceleration according to Euler's equation. Sound waves and shock waves can induce very large pressure gradients, but these are oscillatory, and often transitory disturbances.
See also
Adverse pressure gradient
Force density
Isobar
Geopotential height
Geostrophic wind
Primitive equations
Temperature gradient
References
Conner A. Perrine (1967) The nature and theory of the general circulation of atmosphere, World Meteorological Organization, Publication No. 218, Geneva, Switzerland.
Robert G. Fleagle and Joost A. Businger (1980) An Introduction to Atmospheric Physics, Second Edition, Academic Press, International Geophysics Series, Volume 25, .
John M. Wallace and Peter V. Hobbs (2006) Atmospheric Science: An Introductory Survey, Second Edition, Academic Press, International Geophysics Series, .
External links
IPCC Third Assessment Report
Atmospheric dynamics
Pressure
Spatial gradient | Pressure gradient | [
"Physics",
"Chemistry"
] | 748 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Atmospheric dynamics",
"Pressure",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
2,909,744 | https://en.wikipedia.org/wiki/Radiolysis | Radiolysis is the dissociation of molecules by ionizing radiation. It is the cleavage of one or several chemical bonds resulting from exposure to high-energy flux. The radiation in this context is associated with ionizing radiation; radiolysis is therefore distinguished from, for example, photolysis of the Cl2 molecule into two Cl-radicals, where (ultraviolet or visible spectrum) light is used.
The chemistry of concentrated solutions under ionizing radiation is extremely complex. Radiolysis can locally modify redox conditions, and therefore the speciation and the solubility of the compounds.
Water decomposition
Of all the radiation-based chemical reactions that have been studied, the most important is the decomposition of water. When exposed to radiation, water undergoes a breakdown sequence into hydrogen peroxide, hydrogen radicals, and assorted oxygen compounds, such as ozone, which when converted back into oxygen releases great amounts of energy. Some of these are explosive. This decomposition is produced mainly by alpha particles, which can be entirely absorbed by very thin layers of water.
Summarizing, the radiolysis of water can be written as:
H2O → e−aq, HO•, H•, HO2•, H3O+, OH−, H2O2, H2 (under ionizing radiation)
Applications
Corrosion prediction and prevention in nuclear power plants
It is believed that the enhanced concentration of hydroxyl present in irradiated water in the inner coolant loops of a light-water reactor must be taken into account when designing nuclear power plants, to prevent coolant loss resulting from corrosion.
Hydrogen production
The current interest in nontraditional methods for the generation of hydrogen has prompted a revisit of radiolytic splitting of water, where the interaction of various types of ionizing radiation (α, β, and γ) with water produces molecular hydrogen. This reevaluation was further prompted by the current availability of large amounts of radiation sources contained in the fuel discharged from nuclear reactors. This spent fuel is usually stored in water pools, awaiting permanent disposal or reprocessing. The yield of hydrogen resulting from the irradiation of water with β and γ radiation is low (G values < 1 molecule per 100 electronvolts of absorbed energy), largely because of the rapid reassociation of the species arising during the initial radiolysis. If impurities are present or if physical conditions are created that prevent the establishment of a chemical equilibrium, the net production of hydrogen can be greatly enhanced.
Another approach uses radioactive waste as an energy source for regeneration of spent fuel by converting sodium borate into sodium borohydride. By applying the proper combination of controls, stable borohydride compounds may be produced and used as hydrogen fuel storage medium.
A study conducted in 1976 found that an order-of-magnitude estimate can be made of the average hydrogen production rate that could be obtained by utilizing the energy liberated via radioactive decay. Based on the primary molecular hydrogen yield of 0.45 molecules/100 eV, it would be possible to obtain 10 tons per day. Hydrogen production rates in this range are not insignificant, but are small compared with the average daily usage (1972) of hydrogen in the U.S. of about 2 × 10^4 tons. Addition of a hydrogen-atom donor could increase this by about a factor of six; it was shown that the addition of a hydrogen-atom donor such as formic acid raises the G value for hydrogen to about 2.4 molecules per 100 eV absorbed. The same study concluded that designing such a facility would likely be too unsafe to be feasible.
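A back-of-the-envelope recomputation (our own estimate, not the cited study's calculation) of the absorbed radiation power implied by a given G value and production target.

    EV = 1.602e-19          # joules per electronvolt
    AVOGADRO = 6.022e23

    def absorbed_power_watts(tons_h2_per_day, g_molecules_per_100ev):
        """Absorbed radiation power needed to make a given mass of H2 at a given G value."""
        molecules_per_day = tons_h2_per_day * 1e6 / 2.016 * AVOGADRO   # tonnes -> grams -> mol -> molecules
        joules_per_day = molecules_per_day * (100.0 / g_molecules_per_100ev) * EV
        return joules_per_day / 86400.0

    print(absorbed_power_watts(10, 0.45) / 1e9, "GW")    # roughly 1 GW of absorbed decay energy
    print(absorbed_power_watts(10, 2.4) / 1e9, "GW")     # with a hydrogen-atom donor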
Spent nuclear fuel
Gas generation by radiolytic decomposition of hydrogen-containing materials has been an area of concern for the transport and storage of radioactive materials and waste for a number of years. Potentially combustible and corrosive gases can be generated while at the same time, chemical reactions can remove hydrogen, and these reactions can be enhanced by the presence of radiation. The balance between these competing reactions is not well known at this time.
Radiation therapy
When radiation enters the body, it will interact with the atoms and molecules of the cells (mainly made of water) to produce free radicals and molecules that are able to diffuse far enough to reach the critical target in the cell, the DNA, and damage it indirectly through some chemical reaction. This is the main damage mechanism for photons as they are used for example in external beam radiation therapy.
Typically, the radiolytic events that lead to the damage of the (tumor)-cell DNA are subdivided into different stages that take place on different time scales:
The physical stage (), consists in the energy deposition by the ionizing particle and the consequent ionization of water.
During the physico-chemical stage () numerous processes occur, e.g. the ionized water molecules may split into a hydroxyl radical and a hydrogen atom, or free electrons may undergo solvation.
During the chemical stage (), the first products of radiolysis react with each other and with their surrounding, thus producing several reactive oxygen species which are able to diffuse.
During the bio-chemical stage ( to days) these reactive oxygen species might break the chemical bonds of the DNA, thus triggering the response of enzymes, the immune-system, etc.
Finally, during the biological stage (days up to years) the chemical damage may translate into biological cell death or oncogenesis when the damaged cells attempt to divide.
Earth's history
A suggestion has been made that in the early stages of the Earth's development, when its radioactivity was almost two orders of magnitude higher than at present, radiolysis could have been the principal source of atmospheric oxygen, which ensured the conditions for the origin and development of life. Molecular hydrogen and oxidants produced by the radiolysis of water may also provide a continuous source of energy to subsurface microbial communities (Pedersen, 1999). Such speculation is supported by a discovery in the Mponeng Gold Mine in South Africa, where researchers found a community dominated by a new phylotype of Desulfotomaculum feeding primarily on radiolytically produced H2.
Methods
Pulse radiolysis
Pulse radiolysis is a recent method of initiating fast reactions to study reactions occurring on a timescale faster than approximately one hundred microseconds, when simple mixing of reagents is too slow and other methods of initiating reactions have to be used.
The technique involves exposing a sample of material to a beam of highly accelerated electrons, where the beam is generated by a linac. It has many applications. It was developed in the late 1950s and early 1960s by John Keene in Manchester and Jack W. Boag in London.
Flash photolysis
Flash photolysis is an alternative to pulse radiolysis that uses high-power light pulses (e.g. from an excimer laser) rather than beams of electrons to initiate chemical reactions. Typically ultraviolet light is used which requires less radiation shielding than required for the X-rays emitted in pulse radiolysis.
See also
Radiation chemistry
References
External links
Traité de radioactivité, par Marie Skodowska Curie, published by Gauthier in Paris, 1910.
Precursor and Transient Species in Condensed Phase Radiolysis
Radiolysis for Borate Regeneration
Water Radiolysis, a Possible Source of Atmospheric Oxygen
The Dissociation of Water by Radiant Energy
Resolution of Gas Generation Issues in Packages Containing Radioactive Waste/Materials
Pulse radiolysis
What is pulse Radiolysis
The Formation and Detection of Intermediates in Water Radiolysis, Radiation Research Supplement, Vol. 4, Basic Mechanisms in the Radiation Chemistry of Aqueous Media. Proceedings of a Conference Sponsored by the National Academy of Sciences -- National Research Council of the United States, Gatlinburg, Tennessee, May 9-10, 1963 (1964), pp. 1-23
Nuclear technology
Chemical reactions
Photochemistry
Radiation effects | Radiolysis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,609 | [
"Physical phenomena",
"Materials science",
"Nuclear technology",
"Radiation",
"Condensed matter physics",
"nan",
"Nuclear physics",
"Radiation effects"
] |
6,898,473 | https://en.wikipedia.org/wiki/X-ray%20standing%20waves | The X-ray standing wave (XSW) technique can be used to study the structure of surfaces and interfaces with high spatial resolution and chemical selectivity. Pioneered by B.W. Batterman in the 1960s, the availability of synchrotron light has stimulated the application of this interferometric technique to a wide range of problems in surface science.
Basic principles
An X-ray standing wave (XSW) field is created by interference between an X-ray beam impinging on a sample and a reflected beam. The reflection may be generated at the Bragg condition for a crystal lattice or an engineered multilayer superlattice; in these cases, the period of the XSW equals the periodicity of the reflecting planes. X-ray reflectivity from a mirror surface at small incidence angles may also be used to generate long-period XSWs.
The spatial modulation of the XSW field, described by the dynamical theory of X-ray diffraction, undergoes a pronounced change when the sample is scanned through the Bragg condition. Due to a relative phase variation between the incoming and reflected beams, the nodal planes of the XSW field shift by half the XSW period. Depending on the position of the atoms within this wave field, the measured element-specific absorption of X-rays varies in a characteristic way. Therefore, measurement of the absorption (via X-ray fluorescence or photoelectron yield) can reveal the position of the atoms relative to the reflecting planes. The absorbing atoms can be thought of as "detecting" the phase of the XSW; thus, this method overcomes the phase problem of X-ray crystallography.
For quantitative analysis, the normalized fluorescence or photoelectron yield is described by

Y = 1 + R + 2√R · fc · cos(ν − 2π·Pc),

where R is the reflectivity and ν is the relative phase of the interfering beams. The characteristic shape of Y as the sample is scanned through the reflection can be used to derive precise structural information about the surface atoms, because the two parameters fc (coherent fraction) and Pc (coherent position) are directly related to the Fourier representation of the atomic distribution function. Therefore, with a sufficiently large number of Fourier components being measured, XSW data can be used to establish the distribution of the different atoms in the unit cell (XSW imaging).
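A sketch of how the yield curve depends on the coherent fraction and coherent position, using a simplified non-absorbing two-beam dynamical-diffraction model for R and ν. The parameterization by the normalized angular deviation η and the example values fc = 0.8, Pc = 0.25 are assumptions chosen for illustration, not a full dynamical-theory calculation.

    import numpy as np

    def darwin_reflection(eta):
        """Simplified two-beam, non-absorbing reflection amplitude versus the
        normalized angular deviation eta from the Bragg condition."""
        eta = np.asarray(eta, dtype=complex)
        r = eta - np.sign(eta.real) * np.sqrt(eta**2 - 1)
        inside = np.abs(eta.real) <= 1          # total-reflection range: |r| = 1, phase pi -> 0
        r[inside] = eta[inside] + 1j * np.sqrt(1 - eta[inside]**2)
        return r

    def xsw_yield(eta, fc, Pc):
        """Normalized XSW yield Y = 1 + R + 2*sqrt(R)*fc*cos(nu - 2*pi*Pc)."""
        r = darwin_reflection(eta)
        R = np.abs(r)**2
        nu = np.angle(r)
        return 1 + R + 2 * np.sqrt(R) * fc * np.cos(nu - 2 * np.pi * Pc)

    eta = np.linspace(-3, 3, 601)
    print(xsw_yield(eta, fc=0.8, Pc=0.25)[::100])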
Experimental considerations
XSW measurements of single crystal surfaces are performed on a diffractometer. The crystal is rocked through a Bragg diffraction condition, and the reflectivity and XSW yield are simultaneously measured. XSW yield is usually detected as X-ray fluorescence (XRF). XRF detection enables in situ measurements of interfaces between a surface and gas or liquid environments, since hard X-rays can penetrate these media. While XRF gives an element-specific XSW yield, it is not sensitive to the chemical state of the absorbing atom. Chemical state sensitivity is achieved using photoelectron detection, which requires ultra-high vacuum instrumentation.
Measurements of atomic positions at or near single crystal surfaces require substrates of very high crystal quality. The intrinsic width of a Bragg reflection, as calculated by dynamical diffraction theory, is extremely small (on the order of 0.001° under conventional X-ray diffraction conditions). Crystal defects such as mosaicity can substantially broaden the measured reflectivity, which obscures the modulations in the XSW yield needed to locate the absorbing atom. For defect-rich substrates such as metal single crystals, a normal-incidence or back-reflection geometry is used. In this geometry, the intrinsic width of the Bragg reflection is maximized. Instead of rocking the crystal in space, the energy of the incident beam is tuned through the Bragg condition. Since this geometry requires soft incident X-rays, this geometry typically uses XPS detection of the XSW yield.
Selected applications
Applications which require ultra-high vacuum conditions:
Physisorption and chemisorption studies
Diffusion of dopants in crystals
Superlattices and Quasi-crystal characterization
Applications which do not require ultra-high vacuum conditions:
Langmuir-Blodgett films
Self-assembled monolayers
Model heterogeneous catalysts
Buried interfaces
See also
List of surface analysis methods
References
Further reading
Laboratory techniques in condensed matter physics
Experimental physics
X-ray spectroscopy
Standing | X-ray standing waves | [
"Physics",
"Chemistry",
"Materials_science"
] | 851 | [
"Spectrum (physical sciences)",
"X-rays",
"Electromagnetic spectrum",
"Laboratory techniques in condensed matter physics",
"Experimental physics",
"Condensed matter physics",
"X-ray spectroscopy",
"Spectroscopy"
] |
6,898,571 | https://en.wikipedia.org/wiki/ANNNI%20model | In statistical physics, the axial (or anisotropic) next-nearest neighbor Ising model, usually known as the ANNNI model, is a variant of the Ising model. In the ANNNI model, competing ferromagnetic and antiferromagnetic exchange interactions couple spins at nearest and next-nearest neighbor sites along one of the crystallographic axes of the lattice.
The model is a prototype for complicated spatially modulated magnetic superstructures in crystals.
To describe experimental results on magnetic orderings in erbium, the model was introduced in 1961 by Roger Elliott from the University of Oxford. The model was given its name in 1980 by Michael E. Fisher and Walter Selke, who analysed it first by Monte Carlo methods and then by low-temperature series expansions, showing the fascinating complexity of its phase diagram, including devil's staircases and a Lifshitz point. Indeed, it provides, for two- and three-dimensional systems, a theoretical basis for understanding numerous experimental observations on commensurate and incommensurate structures, as well as accompanying phase transitions, in various magnets, alloys, adsorbates, polytypes, multiferroics, and other solids.
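As an illustration of the competing axial couplings described above, the following toy sketch evaluates the energy of a one-dimensional Ising chain with ferromagnetic nearest-neighbour and antiferromagnetic next-nearest-neighbour couplings. The coupling values, chain length and sign conventions are arbitrary assumptions, not a published parameterization of the model.

    import itertools

    def annni_energy(spins, j1=1.0, j2=0.6):
        """Axial ANNNI energy of a short open Ising chain:
        E = -j1 * sum s_i s_(i+1) + j2 * sum s_i s_(i+2), with j1, j2 > 0."""
        e = -j1 * sum(s * t for s, t in zip(spins, spins[1:]))
        e += j2 * sum(s * t for s, t in zip(spins, spins[2:]))
        return e

    # brute-force ground state of an 8-spin chain; for j2/j1 > 1/2 a modulated
    # up-up-down-down pattern beats the ferromagnetic state
    best = min(itertools.product((-1, 1), repeat=8), key=annni_energy)
    print(best, annni_energy(best))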
Further possible applications range from modeling of cerebral cortex to quantum information.
References
Statistical mechanics
Lattice models
Spin models | ANNNI model | [
"Physics",
"Materials_science"
] | 275 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics"
] |
30,631,568 | https://en.wikipedia.org/wiki/Lifting%20Operations%20and%20Lifting%20Equipment%20Regulations%201998 | The Lifting Operations and Lifting Equipment Regulations 1998 (LOLER) are set of regulations created under the Health and Safety at Work etc. Act 1974 which came into force in Great Britain on 5 December 1998 and replaced a number of other pieces of legislation which previously covered the use of lifting equipment. The purpose of the regulations was to reduce the risk of injury from lifting equipment used at work. Areas covered in the regulations include the requirement for lifting equipment to be strong and stable enough for safe use and to be marked to indicate safe working loads; ensuring that any equipment is positioned and installed so as to minimise risks; that the equipment is used safely ensuring that work is planned, organised and performed by a competent person; that equipment is subject to ongoing thorough examination and where appropriate, inspection by competent people.
Lifting equipment
The regulations define lifting equipment as "work equipment for lifting or lowering loads and includes its attachments used for anchoring, fixing or supporting it". They cover any equipment used at work for lifting goods or people. Equipment covered would include lifts, cranes, ropes, slings, hooks, shackles, eyebolts, rope and pulley systems and forklift trucks. The regulations apply to all workplaces, and all the provisions of the Provision and Use of Work Equipment Regulations 1998 also apply to lifting equipment.
Safe working load
A safe working load (SWL) should, according to the regulations be marked onto lifting equipment with the relevant SWL being dependent on the configuration of the equipment, accessories for lifting such as eye bolts, lifting magnets and lifting beams should also be marked. The load itself would be based on the maximum load that the equipment can lift safely. Lifting equipment that is designed for lifting people must also be appropriately and clearly marked.
Passenger lifts
The regulations state that all lifts provided for use with work activities should be thoroughly examined by a 'competent person' at regular intervals. Regulation 9 of the Lifting Operations and Lifting Equipment Regulations requires all employers to have their equipment thoroughly examined before it is put into service and after any major alteration that could affect its operation. Owners or people responsible for the safe operation of a lift at work are known as 'dutyholders' and have a responsibility to ensure that the lift has been thoroughly examined and is safe to use. Lifts in use should be thoroughly examined every six months if, at any time, the lift has been used to carry people; lifts used only to carry loads should be examined every 12 months. If any substantial or significant changes have been made to the equipment, this would also require an examination, as would any change in operating conditions likely to affect the integrity of the equipment.
LOLER Inspections
These are a legal requirement and should be carried out by a competent person. Though a "competent person" is not defined within the legislation, guidance is given in the HSE LOLER Approved Code of Practice and guidance which gives further details that the person should have the "appropriate practical and theoretical knowledge and experience of the lifting equipment" which would allow them to identify safety issues.
In practice, an insurance company may provide a competent person or request a third party independent inspector.
These inspections should be carried out at 6 monthly intervals for all lifting items and at least every 12 months for those that could be covered by PUWER, although a competent person may determine different time scales.
Standards state that, as a minimum:
Every six months for lifting equipment used for lifting/lowering persons.
Every six months for lifting accessories.
Every 12 months for all other lifting equipment not falling into either of the above categories
A competent person may determine different time scales.
Employers' and workers' obligations
LOLER 1998 put in place four key protocols that all employers and workers must abide by.
All equipment must be safe and suitable for purpose. The manufacturer must identify any hazards associated with the equipment in question and must then assess these hazards to bring them down to acceptable levels. All lifting equipment is normally put through an independent type-testing process to establish that it will safely perform the tasks required, to one of the standards below.
BS (British Standard, used mainly in the UK)
ISO Standards (International Standard)
EN (Euronorm, used throughout Europe)
CEN/CENELEC (Euronorm Standards)
The above standards are a published specification that establishes a common language and contains a technical specification or other precise criteria. They are designed to be used consistently as a rule, guideline or definition.
All personnel must be suitably trained. All manufacturers of lifting equipment are obliged to send out instructions for use of all products. The employer is then obliged to make sure employees are aware of these instructions and use the lifting equipment correctly. To achieve this the employees must be competent. Competence is achieved through experience, technical knowledge and training.
All equipment must be maintained in a safe condition. It is good practice for all personnel using lifting equipment to conduct a pre-use inspection on all items. Regulation 9 of LOLER also outlines specific requirements for the formal inspection of lifting equipment at mandatory intervals. These inspections are to be performed by a competent person and the findings of the inspections recorded. Maximum fixed periods for thorough examinations and inspection of lifting equipment as stated in regulation 9 of LOLER are:
Equipment used for lifting persons – 6 Months
Lifting accessories – 6 Months
Other lifting appliances – 12 Months
or in accordance with a written scheme of examination. Any inspection record must be made in line with the requirements of schedule 1 of LOLER.
The only exception to this is if the lifting equipment has not been used before and, in the case of lifting equipment issued with an EC declaration of conformity, the employer has possession of such a declaration and it was made not more than 12 months before the lifting equipment is put into service.
Record keeping
Operators of lifting equipment are legally required to ensure that reports of thorough examinations are kept available for consideration by health and safety inspectors for at least two years or until the next report, whichever is longer.
Records must be kept for all equipment. All equipment manufactured should be given a "birth certificate" proving that, when first made, it complied with the relevant requirements. In Europe today, this document would normally be an EC declaration of conformity, plus a manufacturer's certificate if called for by the standard worked to.
Records may be kept electronically as long as a written report can be provided if requested.
For a full understanding of health and safety requirements in the motor vehicle repair industry, read HSE document HSG261.
Prosecutions arising from the regulations
On 17 January 2011, a Liverpool nursing home was fined £18,000 after Frances Shannon, an 81-year-old woman fell to the ground whilst being lifted out of bed.
The Christopher Grange nursing home, run by the Catholic Blind Institute, was prosecuted by the Health and Safety Executive (HSE) for failing to carry out regular checks of the sling equipment which was used to lift Mrs Shannon, who suffered a broken shoulder and injuries to her back and elbow.
Mrs Shannon was taken to the Royal Liverpool University Hospital and died the day following the incident. Speaking of the prosecution, Sarah Wadham, the HSE's inspecting officer, said that the incident could have been prevented, telling the press: "There should have been regular checks of the sling and it should have been thoroughly examined at least once every six months. Sadly this did not happen."
The Catholic Blind Institute was charged under section 9 of the regulations and ordered to also pay £13,876 costs.
Notes
References
1998 in British law
Health and safety in the United Kingdom
Lifting equipment
Safety engineering
Safety codes
Statutory instruments of the United Kingdom | Lifting Operations and Lifting Equipment Regulations 1998 | [
"Physics",
"Technology",
"Engineering"
] | 1,537 | [
"Systems engineering",
"Machines",
"Safety engineering",
"Lifting equipment",
"Physical systems"
] |
30,631,605 | https://en.wikipedia.org/wiki/Dopant%20activation | Dopant activation is the process of obtaining the desired electronic contribution from impurity species in a semiconductor host. The term is often restricted to the application of thermal energy following the ion implantation of dopants. In the most common industrial example, rapid thermal processing is applied to silicon following the ion implantation of dopants such as phosphorus, arsenic and boron. Vacancies generated at elevated temperature (1200 °C) facilitate the movement of these species from interstitial to substitutional lattice sites while amorphization damage from the implantation process recrystallizes. A relatively rapid process, peak temperature is often maintained for less than one second to minimize unwanted chemical diffusion.
References
Semiconductor properties | Dopant activation | [
"Physics",
"Materials_science"
] | 141 | [
"Semiconductor properties",
"Condensed matter physics"
] |
30,634,006 | https://en.wikipedia.org/wiki/Quantitative%20susceptibility%20mapping | Quantitative susceptibility mapping (QSM) provides a novel contrast mechanism in magnetic resonance imaging (MRI) different from traditional susceptibility weighted imaging.
The voxel intensity in QSM is linearly proportional to the underlying tissue apparent magnetic susceptibility, which is useful for chemical identification and quantification of specific biomarkers including iron, calcium, gadolinium, and super paramagnetic iron oxide (SPIO) nano-particles. QSM utilizes phase images, solves the magnetic field to susceptibility source inverse problem, and generates a three-dimensional susceptibility distribution. Due to its quantitative nature and sensitivity to certain kinds of material, potential QSM applications include standardized quantitative stratification of cerebral microbleeds and neurodegenerative disease, accurate gadolinium quantification in contrast enhanced MRI, and direct monitoring of targeted theranostic drug biodistribution in nanomedicine.
Background
In MRI, the local field δB induced by non-ferromagnetic biomaterial susceptibility along the main polarization B0 field is the convolution of the volume susceptibility distribution χ with the dipole kernel d: δB = d ⊗ χ. This spatial convolution can be expressed as a point-wise multiplication in the Fourier domain: ΔB(k) = D(k) · X(k), with the k-space dipole kernel D(k) = 1/3 − kz²/|k|² (for B0 along z). This Fourier expression provides an efficient way to predict the field perturbation when the susceptibility distribution is known. However, the field-to-source inverse problem involves division by zero on a pair of cone surfaces at the magic angle with respect to B0 in the Fourier domain, where D(k) vanishes. Consequently, susceptibility is underdetermined at the spatial frequencies on the cone surface, which often leads to severe streaking artifacts in the reconstructed QSM.
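The forward model above can be written in a few lines. The sketch below builds the k-space dipole kernel for B0 along z and computes the field perturbation of a 1 ppm cube; the grid size, voxel size and the value assigned at the k-space origin are illustrative conventions, not a validated reconstruction pipeline.

    import numpy as np

    def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0, 0, 1)):
        """Unitless k-space dipole kernel D(k) = 1/3 - (k . b0)^2 / |k|^2."""
        ks = [np.fft.fftfreq(n, d=dx) for n, dx in zip(shape, voxel_size)]
        kx, ky, kz = np.meshgrid(*ks, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = np.inf                 # avoid 0/0 at the k-space origin (convention)
        b = np.asarray(b0_dir, float)
        b /= np.linalg.norm(b)
        kdotb = kx * b[0] + ky * b[1] + kz * b[2]
        return 1.0 / 3.0 - kdotb**2 / k2

    # forward model: field perturbation (in ppm of B0) from a susceptibility map chi (ppm)
    chi = np.zeros((64, 64, 64))
    chi[28:36, 28:36, 28:36] = 1.0           # 1 ppm cube
    D = dipole_kernel(chi.shape)
    field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
    print(field[32, 32, 32], field[32, 32, 45])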
Techniques
Data acquisition
In principle, any 3D gradient echo sequence can be used for data acquisition. In practice, high resolution imaging with a moderately long echo time is preferred to obtain sufficient susceptibility effects, although the optimal imaging parameters depend on the specific applications and the field strength. A multi-echo acquisition is beneficial for accurate B0 field measurement without the contribution from B1 inhomogeneity. Flow compensation may further improve the accuracy of susceptibility measurement in venous blood, but there are certain technical difficulties to devise a fully flow compensated multi-echo sequence.
Background field removal
In human brain quantitative susceptibility mapping, only the local susceptibility sources inside the brain are of interest. However, the magnetic field induced by the local sources is inevitably contaminated by the field induced by other sources such as main field inhomogeneity (imperfect shimming) and the air-tissue interface, whose susceptibility difference is orders of magnitudes stronger than that of the local sources. Therefore, the non-biological background field needs to be removed for clear visualization on phase images and precise quantification on QSM.
Ideally, the background field can be directly measured with a separate reference scan, where the sample of interest is replaced by a uniform phantom with the same shape while keeping the scanner shimming identical. However, for clinical application, such an approach is impossible and post-processing based methods are preferred. Traditional heuristic methods, including high-pass filtering, are useful for the background field removal, although they also tamper with the local field and degrade the quantitative accuracy.
More recent background field removal methods directly or indirectly exploit the fact that the background field is a harmonic function. Two recent methods based on physical principles, projection onto dipole fields (PDF) and sophisticated harmonic artifact reduction on phase data (SHARP), demonstrated improved contrast and higher precision on the estimated local field. Both methods model the background field as a magnetic field generated by an unknown background susceptibility distribution, and differentiate it from the local field using either the approximate orthogonality or the harmonic property. The background field can also be directly computed by solving the Laplace's equation with simplified boundary values, as demonstrated in the Laplacian boundary value (LBV) method.
Field-to-source inversion
The field-to-source inverse problem can be solved by several methods with various associated advantages and limitations.
Calculation of susceptibility through multiple orientation sampling (COSMOS)
COSMOS solves the inverse problem by oversampling from multiple orientations. COSMOS utilizes the fact that the zero cone surface in the Fourier domain is fixed at the magic angle with respect to the B0 field. Therefore, if an object is rotated with respect to the B0 field, then in the object's frame, the B0 field is rotated and thus the cone. Consequently, data that cannot be calculated due to the cone becomes available at the new orientations.
COSMOS assumes a model-free susceptibility distribution and keeps full fidelity to the measured data. This method has been validated extensively in in vitro, ex vivo and phantom experiments. Quantitative susceptibility maps obtained from in vivo human brain imaging also showed high degree of agreement with previous knowledge about brain anatomy. Three orientations are generally required for COSMOS, limiting the practicality for clinical applications. However, it may serve as a reference standard when available for calibrating other techniques.
Morphology enabled dipole inversion (MEDI)
A unique advantage of MRI is that it provides not only the phase image but also the magnitude image. In principle, the contrast change, or equivalently the edge, on a magnitude image arises from the underlying change of tissue type, which is the same cause for the change of susceptibility. This observation is translated into mathematics in MEDI, where edges in a QSM which do not exist in the corresponding magnitude image are sparsified by solving a weighted norm minimization problem.
MEDI has also been validated extensively in phantom, in vitro and ex vivo experiments. In an in vivo human brain, MEDI calculated QSM showed similar results compared to COSMOS without statistically significant difference. MEDI only requires a single angle acquisition, so it is a more practical solution to QSM.
Thresholded K-space division (TKD)
The underdetermined data in Fourier domain is only at the location of the cone and its immediate vicinity. For this region in k-space, spatial-frequencies of the dipole kernel are set to a predetermined non-zero value for the division. Investigation of more advanced strategies for recovering data in this k-space region is also a topic of ongoing research.
Thresholded k-space division only requires a single angle acquisition, and benefits from the ease of implementation as well as the fast calculation speed. However, streaking artifacts are frequently present in the QSM and the susceptibility value is underestimated compared to COSMOS calculated QSM.
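A minimal sketch of the thresholded k-space division just described; the threshold of 0.2 and the assumption that B0 lies along z are illustrative choices, and the function expects a background-corrected local field in the same units as the desired susceptibility map.

    import numpy as np

    def tkd_qsm(field, voxel_size=(1.0, 1.0, 1.0), threshold=0.2):
        """Thresholded k-space division (TKD): chi = IFFT[ FFT(field) / D_thr(k) ]."""
        ks = [np.fft.fftfreq(n, d=dx) for n, dx in zip(field.shape, voxel_size)]
        kx, ky, kz = np.meshgrid(*ks, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = np.inf                              # avoid 0/0 at the origin
        D = 1.0 / 3.0 - kz**2 / k2                        # dipole kernel, B0 along z
        # replace near-zero kernel values (the conical surfaces) by +/- threshold
        D_thr = np.where(np.abs(D) < threshold, threshold * np.where(D < 0, -1.0, 1.0), D)
        return np.real(np.fft.ifftn(np.fft.fftn(field) / D_thr))

    # usage (hypothetical input): chi_map = tkd_qsm(local_field_ppm, voxel_size=(0.6, 0.6, 0.6))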
Potential clinical applications
Differentiating calcification from iron
It has been confirmed in in vivo and phantom experiments that cortical bones, whose major composition is calcification, are diamagnetic compared to water. Therefore, it is possible to use this diamagnetism to differentiate calcifications from iron deposits that usually demonstrate strong paramagnetism. This may allow QSM to serve as a problem solving tool for the diagnosis of confounding hypointense findings on T2* weighted images.
Quantification of contrast agent
For exogenous susceptibility sources, the susceptibility value is theoretically linearly proportional to the concentration of the contrast agent. This provides a new way for in vivo quantification of gadolinium or SPIO concentrations.
References
Magnetic resonance imaging | Quantitative susceptibility mapping | [
"Chemistry"
] | 1,535 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
30,639,331 | https://en.wikipedia.org/wiki/Jurin%27s%20law | Jurin's law, or capillary rise, is the simplest analysis of capillary action—the induced motion of liquids in small channels—and states that the maximum height of a liquid in a capillary tube is inversely proportional to the tube's diameter. Capillary action is one of the most common fluid mechanical effects explored in the field of microfluidics. Jurin's law is named after James Jurin, who discovered it between 1718 and 1719. His quantitative law suggests that the maximum height of liquid in a capillary tube is inversely proportional to the tube's diameter. The difference in height between the surroundings of the tube and the inside, as well as the shape of the meniscus, are caused by capillary action. The mathematical expression of this law can be derived directly from hydrostatic principles and the Young–Laplace equation. Jurin's law allows the measurement of the surface tension of a liquid and can be used to derive the capillary length.
Formulation
The law is expressed as
h = (2γ cos θ) / (ρ g r0),
where
h is the liquid height;
γ is the surface tension;
θ is the contact angle of the liquid on the tube wall;
ρ is the mass density (mass per unit volume);
r0 is the tube radius;
g is the gravitational acceleration.
It is only valid if the tube is cylindrical and has a radius (r0) smaller than the capillary length (λc = √(γ / (ρ g))). In terms of the capillary length, the law can be written as
.
Examples
For a water-filled glass tube in air at standard conditions for temperature and pressure, at 20 °C, γ ≈ 0.0728 N/m, ρ ≈ 1000 kg/m³ and g ≈ 9.81 m/s². Because water spreads on clean glass, the effective equilibrium contact angle is approximately zero. For these values, the height of the water column is h ≈ (1.48 × 10^−5 m²) / r0.
Thus for a radius glass tube in lab conditions given above, the water would rise an unnoticeable . However, for a radius tube, the water would rise , and for a radius tube, the water would rise .
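The rise heights follow directly from the formula. The sketch below evaluates it for a few illustrative tube radii; the specific radii are assumptions (the examples above do not state them here), and the water parameters are those given earlier.

    import math

    def jurin_height(radius_m, gamma=0.0728, rho=1000.0, g=9.81, theta_deg=0.0):
        """Capillary rise h = 2*gamma*cos(theta) / (rho*g*r) for a cylindrical tube."""
        return 2.0 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * radius_m)

    for r in (1e-2, 1e-3, 2e-4, 1e-4):       # illustrative radii in metres
        print(f"r = {r:.0e} m  ->  h = {jurin_height(r)*1000:.3g} mm")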
Capillary action is used by many plants to bring up water from the soil. For tall trees (larger than ~10 m (32 ft)), other processes like osmotic pressure and negative pressures are also important.
History
During the 15th century, Leonardo da Vinci was one of the first to propose that mountain streams could result from the rise of water through small capillary cracks.
It is later, in the 17th century, that the theories about the origin of capillary action begin to appear. Jacques Rohault erroneously supposed that the rise of the liquid in a capillary could be due to the suppression of air inside and the creation of a vacuum. The astronomer Geminiano Montanari was one of the first to compare the capillary action to the circulation of sap in plants. Additionally, the experiments of Giovanni Alfonso Borelli determined in 1670 that the height of the rise was inversely proportional to the radius of the tube.
Francis Hauksbee, in 1713, refuted the theory of Rohault through a series of experiments on capillary action, a phenomenon that was observable in air as well as in vacuum. Hauksbee also demonstrated that the liquid rise appeared on different geometries (not only circular cross sections), and on different liquids and tube materials, and showed that there was no dependence on the thickness of the tube walls. Isaac Newton reported the experiments of Hauksbee in his work Opticks but without attribution.
It was the English physiologist James Jurin who, in 1718, finally confirmed the experiments of Borelli; the law was named in his honour.
Derivation
The height of the liquid column in the tube is constrained by the hydrostatic pressure and by the surface tension. The following derivation is for a liquid that rises in the tube; for the opposite case when the liquid is below the reference level, the derivation is analogous but pressure differences may change sign.
Laplace pressure
Above the interface between the liquid and the surface, the pressure is equal to the atmospheric pressure p_a. At the meniscus interface, due to the surface tension, there is a pressure difference of Δp = p_a − p_c, where p_c is the pressure on the convex side; Δp is known as the Laplace pressure. If the tube has a circular section of radius r0, and the meniscus has a spherical shape, the radius of curvature is r = r0/cos θ, where θ is the contact angle. The Laplace pressure is then calculated according to the Young–Laplace equation:

\Delta p = \frac{2\gamma}{r} = \frac{2\gamma\cos\theta}{r_0},

where γ is the surface tension.
Hydrostatic pressure
Outside and far from the tube, the liquid reaches a ground level in contact with the atmosphere. Liquids in communicating vessels have the same pressures at the same heights, so a point B, inside the tube, at the same liquid level as outside, would have the same pressure p_a. Yet the pressure at this point follows a vertical pressure variation as

p_B = p_c + \rho g h,

where g is the gravitational acceleration and ρ the density of the liquid. This equation means that the pressure at point B is the pressure at the interface plus the pressure due to the weight of the liquid column of height h. In this way, we can calculate the pressure at the convex interface

p_c = p_a - \rho g h.
Result at equilibrium
The hydrostatic analysis shows that p_a − p_c = ρgh; combining this with the Laplace pressure calculation we have

\rho g h = \frac{2\gamma\cos\theta}{r_0};

solving for h returns Jurin's law.
References
Fluid dynamics
Hydrology | Jurin's law | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,071 | [
"Hydrology",
"Chemical engineering",
"Environmental engineering",
"Piping",
"Fluid dynamics"
] |
30,643,318 | https://en.wikipedia.org/wiki/Modified%20active%20gas%20sampling | Modified Active Gas Sampling (MAGS) is an environmental engineering assessment technique which rapidly detects unsaturated soil source areas impacted by volatile organic compounds. The technique was developed by HSA Engineers & Scientists in Fort Myers, Florida in 2002, led by Richard Lewis, Steven Folsom, and Brian Moore. It is being used all over the United States, and has been adopted by the state of Florida in its Dry-cleaning Solvent Cleanup Program.
Process
MAGS involves the extraction and analysis of soil vapor from a piezometer screened through the unsaturated soil column for the purpose of locating unsaturated zone source material. According to the MAGS Manual, written by HSA and adopted by the Florida Department of Environmental Protection, MAGS is performed "by utilizing a typical regenerative blower fitted to a temporary soil vapor extraction well, [such that]a large volume of soil can be assessed with a limited number of samples. While lacking the resolution of traditional soil sampling methods (e.g., discrete soil sampling, low flow active gas sampling, etc.), the statistical representativeness (in the sense of sample coverage) of MAGS results versus traditional methods is much greater. Moreover, the results of the assessment provide useful transport and exposure assessment information over traditional techniques. Lastly, MAGS is effective as both an initial site assessment and remedial assessment tool, in that, MAGS directly yields data required for remedial design."
Advantages
MAGS is an alternative to discrete and composite soil sampling. MAGS, while it does not describe the sample with as much precision as the previously mentioned sampling methods, is more powerful statistically: it represents a larger area of a site, which is more useful in determining the presence of a compound. Besides increasing the accuracy in identifying the presence of compounds in the soil, MAGS can also quickly and accurately narrow down the location and spread of the compounds after a few trials. Once the location has been determined, more thorough and traditional soil borings can be done in the identified location, instead of sampling a whole site.
HSA found particular success using the technique at solvent-impacted sites that were showing signs of rebound after initial remediation efforts. These rebounds are commonly the result of multiple (relatively small) release areas that had not been previously discovered with discrete soil sampling. MAGS can be useful in detecting how effectively the site had been cleaned up post-remediation.
Branding
HSA Engineers & Scientists considered patenting MAGS technology, but decided to trademark MAGS instead, asking that those who use the technique credit the firm.
Recognition
In 2009, HSA was recognized by the Environmental Business Journal with a Technology Merit Award in the category of remediation for the invention of MAGS technology.
References
Environmental engineering
Soil | Modified active gas sampling | [
"Chemistry",
"Engineering"
] | 563 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
37,203,490 | https://en.wikipedia.org/wiki/Oversampled%20binary%20image%20sensor | An oversampled binary image sensor is an image sensor with non-linear response capabilities reminiscent of traditional photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. The response function of the image sensor is non-linear and similar to a logarithmic function, which makes the sensor suitable for high dynamic range imaging.
Working principle
Before the advent of digital image sensors, photography, for the most part of its history, used film to record light information. At the heart of every photographic film are a large number of light-sensitive grains of silver-halide crystals. During exposure, each micron-sized grain has a binary fate: Either it is struck by some incident photons and becomes "exposed", or it is missed by the photon bombardment and remains "unexposed". In the subsequent film development process, exposed grains, due to their altered chemical properties, are converted to silver metal, contributing to opaque spots on the film; unexposed grains are washed away in a chemical bath, leaving behind the transparent regions on the film. Thus, in essence, photographic film is a binary imaging medium, using local densities of opaque silver grains to encode the original light intensity information. Thanks to the small size and large number of these grains, one hardly notices this quantized nature of film when viewing it at a distance, observing only a continuous gray tone.
The oversampled binary image sensor is reminiscent of photographic film. Each pixel in the sensor has a binary response, giving only a one-bit quantized measurement of the local light intensity. At the start of the exposure period, all pixels are set to 0. A pixel is then set to 1 if the number of photons reaching it during the exposure is at least equal to a given threshold q. One way to build such binary sensors is to modify standard memory chip technology, where each memory bit cell is designed to be sensitive to visible light. With current CMOS technology, the level of integration of such systems can exceed 10^9–10^10 (i.e., 1 giga to 10 giga) pixels per chip. In this case, the corresponding pixel sizes (around 50 nm) are far below the diffraction limit of light, and thus the image sensor is oversampling the optical resolution of the light field. Intuitively, one can exploit this spatial redundancy to compensate for the information loss due to one-bit quantizations, as is classic in oversampling delta-sigma converters.
Building a binary sensor that emulates the photographic film process was first envisioned by Fossum, who coined the name digital film sensor (now referred to as a quanta image sensor). The original motivation was mainly out of technical necessity. The miniaturization of camera systems calls for the continuous shrinking of pixel sizes. At a certain point, however, the limited full-well capacity (i.e., the maximum photon-electrons a pixel can hold) of small pixels becomes a bottleneck, yielding very low signal-to-noise ratios (SNRs) and poor dynamic ranges. In contrast, a binary sensor whose pixels need to detect only a few photon-electrons around a small threshold q has much less requirement for full-well capacities, allowing pixel sizes to shrink further.
Imaging model
Lens
Consider a simplified camera model shown in Fig.1. The incoming light intensity field, assuming that light intensities remain constant within a short exposure period, can be modeled as a function of the spatial variable alone. After passing through the optical system, the original light field gets filtered by the lens, which acts like a linear system with a given impulse response. Due to imperfections (e.g., aberrations) in the lens, the impulse response, a.k.a. the point spread function (PSF) of the optical system, cannot be a Dirac delta, thus imposing a limit on the resolution of the observable light field. However, a more fundamental physical limit is due to light diffraction. As a result, even if the lens is ideal, the PSF is still unavoidably a small blurry spot. In optics, such a diffraction-limited spot is often called the Airy disk, whose radius can be computed as

R \approx 1.22\,\lambda N,

where λ is the wavelength of the light and N is the F-number of the optical system. Due to the lowpass (smoothing) nature of the PSF, the resulting light field has a finite spatial resolution, i.e., it has a finite number of degrees of freedom per unit space.
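As a rough plausibility check (assuming visible light around 550 nm and an F-number of f/2.8 — both illustrative choices), the diffraction-limited spot is a few micrometres across, orders of magnitude larger than the ~50 nm pixels mentioned above:

# Diffraction-limited (Airy disk) radius: R ~ 1.22 * wavelength * F-number
wavelength = 550e-9   # green light, metres (assumed)
f_number = 2.8        # assumed lens F-number

airy_radius = 1.22 * wavelength * f_number
pixel_size = 50e-9    # binary-sensor pixel pitch from the text, metres

print(f"Airy disk radius: {airy_radius * 1e6:.2f} um")   # ~1.9 um
print(f"Pixel size      : {pixel_size * 1e9:.0f} nm")
print(f"Spot/pixel ratio: {airy_radius / pixel_size:.0f}x")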
Sensor
Fig.2 illustrates the binary sensor model. The quantities s_m denote the exposure values accumulated by the sensor pixels. Depending on the local values of s_m, each pixel (depicted as "buckets" in the figure) collects a different number of photons hitting on its surface. y_m is the number of photons impinging on the surface of the m-th pixel during an exposure period. The relation between s_m and the photon count y_m is stochastic. More specifically, y_m can be modeled as a realization of a Poisson random variable whose intensity parameter is equal to s_m,

P(y_m = k) = \frac{s_m^k e^{-s_m}}{k!}, \qquad k = 0, 1, 2, \ldots

As a photosensitive device, each pixel in the image sensor converts photons to electrical signals, whose amplitude is proportional to the number of photons impinging on that pixel. In a conventional sensor design, the analog electrical signals are then quantized by an A/D converter into 8 to 14 bits (usually the more bits the better). But in the binary sensor, the quantizer is 1 bit. In Fig.2, b_m is the quantized output of the m-th pixel: b_m = 1 if y_m ≥ q and b_m = 0 otherwise. Since the photon counts y_m are drawn from random variables, so are the binary sensor outputs b_m.
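A minimal simulation of this measurement model (the notation s_m, y_m, b_m follows the description above; the threshold q, the light level and the pixel count are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)

def binary_sensor(exposure, threshold=1):
    """Simulate one frame of an oversampled binary sensor.

    exposure  : array of per-pixel exposure values s_m (expected photon counts)
    threshold : q, the photon count at or above which a pixel reads 1
    """
    photons = rng.poisson(exposure)                    # y_m ~ Poisson(s_m)
    return (photons >= threshold).astype(np.uint8)     # b_m in {0, 1}

# Example: 10,000 binary pixels all seeing the same light level s = 0.5 photons/pixel
s = np.full(10_000, 0.5)
bits = binary_sensor(s, threshold=1)
print("fraction of ones:", bits.mean())                # ~ 1 - exp(-0.5) = 0.39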
Spatial and temporal oversampling
If temporal oversampling is allowed, i.e., taking multiple consecutive and independent frames without changing the total exposure time, the performance of the binary sensor is, under certain conditions, equivalent to that of a sensor with the same degree of spatial oversampling. This means that one can trade off spatial oversampling against temporal oversampling, which is important in practice because technology places limits on both pixel size and exposure time.
Advantages over traditional sensors
Due to the limited full-well capacity of a conventional image pixel, the pixel saturates when the light intensity is too strong, which is why its dynamic range is low. For the oversampled binary image sensor, the dynamic range is defined not for a single pixel but for a group of pixels, which makes the dynamic range high.
Reconstruction
One of the most important challenges with the use of an oversampled binary image sensor is the reconstruction of the light intensity from the binary measurements b_m. Maximum likelihood estimation can be used to solve this problem. Fig. 4 shows the results of reconstructing the light intensity from 4096 binary images taken by a single-photon avalanche diode (SPAD) camera. Better reconstruction quality with fewer temporal measurements, as well as faster, hardware-friendly implementations, can be achieved by more sophisticated algorithms.
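For the simplest case of threshold q = 1 and N independent binary frames per pixel, the maximum likelihood estimate has a closed form; the sketch below (continuing the notation and simulator above) illustrates only that special case and is not the specific algorithm used for Fig. 4:

import numpy as np

rng = np.random.default_rng(1)

def ml_estimate(bits):
    """Closed-form MLE of the exposure s for threshold q = 1.

    bits : (N, num_pixels) array of binary frames b_m.
    Since P(b_m = 1) = 1 - exp(-s), the MLE is s_hat = -log(1 - mean(b_m)).
    """
    ones_fraction = bits.mean(axis=0)
    # Clip to avoid log(0) when every frame read 1 (the estimate saturates there)
    ones_fraction = np.clip(ones_fraction, 0.0, 1.0 - 1e-12)
    return -np.log(1.0 - ones_fraction)

# Simulate N = 4096 binary frames of one pixel with true exposure s = 0.8
true_s = 0.8
frames = (rng.poisson(true_s, size=(4096, 1)) >= 1).astype(np.uint8)
print("estimated s:", ml_estimate(frames)[0])   # close to 0.8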
References
Digital photography
Image sensors
Image processing
Digital signal processing
Digital electronics | Oversampled binary image sensor | [
"Engineering"
] | 1,460 | [
"Electronic engineering",
"Digital electronics"
] |
37,203,904 | https://en.wikipedia.org/wiki/Middleware%20for%20Robotic%20Applications | Middleware for Robotic Applications (MIRA) is a cross-platform, open-source software framework written in C++ that provides a middleware, several base functionalities and numerous tools for developing and testing distributed software modules. It also focuses on easy creation of complex, dynamic applications, while reusing these modules as plugins. The main purpose of MIRA is the development of robotic applications, but as it is designed to allow type safe data exchange between software modules using intra- and interprocess communication it is not limited to these kinds of applications.
MIRA is developed in a cooperation of the MetraLabs GmbH and the Ilmenau University of Technology/Neuroinformatics and Cognitive Robotics Lab. Therefore, MIRA was designed to fulfill the requirements of both commercial and educational purposes.
Features
General:
adds introspection/reflection and serialization to C++ with the usage of C++ language-constructs only (a meta-language or metacompilers are not necessary)
efficient data exchange between software modules
the used communication technique based on "channels" always allows non-blocking access to the transferred data
for the user the communication is fully transparent: no matter whether the software modules are located within the same process, in different processes or on different machines, the underlying transport layer will choose the fastest method for data transportation automatically
besides data exchange via "channels", MIRA supports Remote Procedure Calls (RPC) and Remote Method Invocation.
MIRA is fully decentralized, hence there is no central server or central communication hub, making its communication more robust and allowing its usage in multi-robot applications
Robotic Application specific:
easy configuration of software modules via configuration files
parameters of algorithms can be modified live at runtime to speed up the debugging and development process
huge amounts of robot sensor data can be recorded in Tapes for later playback; different codecs can be used to compress the data
Platforms
MIRA supports and was successfully tested on the following platforms:
Linux – Ubuntu and derivates, OpenSuse, CentOS, Red Hat and Fedora
Windows – Microsoft Windows XP, Windows Vista, Windows 7 (32bit and 64bit)
Applications using MIRA
MIRA is used within the following applications:
Konrad and Suse - guide robots that guide visitors within the Zuse Building of the Ilmenau University of Technology
Monitoring the air quality within clean rooms at Infineon Technologies using several SCITOS G5 robots
and projects:
CompanionAble - Integrated Cognitive Assistive & Domotic Companion Robotic System for Ability & Security
Robot-Era - Implementation and integration of advanced robotic systems and intelligent environments in real scenarios for the ageing population
Usability
Reflection/Serialization
#include <list>
#include <map>
#include <string>

class Foo;  // some user-defined type (declaration only, for illustration)

class Data
{
    int value;
    std::map<std::string, std::list<int> > complex;
    Foo* ptr;

    // MIRA discovers and serializes the members listed in reflect()
    template <typename Reflector>
    void reflect(Reflector& r)
    {
        r.member("Value", value, "an int member");
        r.member("Complex", complex, "a complex member");
        r.member("Pointer", ptr, "a pointer member");
    }
};
arbitrary complex data types can be serialized by adding a simple reflect method to the class as shown above
after these minor changes, the objects of the class can be transported via inter-process communication, can be used as parameters in configuration files for software modules, can be recorded in "Tape" files, etc.
Remote Procedure Calls
#include <list>

class MyClass
{
    int compute(const std::list<float>& values);

    // Registering the method in reflect() exposes it to MIRA's RPC mechanism
    template <typename Reflector>
    void reflect(Reflector& r)
    {
        r.method("compute", &MyClass::compute, this, "comment");
    }
};
arbitrary methods can be turned into RPC methods by adding one line of code within the reflect() method. There is no need to write wrappers around the methods or to use meta description languages.
References
External links
MIRA Website
MIRA Documentation
MIRA Questions & Answers
Robotics software
Robotics suites
2012 software
2012 in robotics | Middleware for Robotic Applications | [
"Engineering"
] | 832 | [
"Robotics software",
"Robotics engineering"
] |
37,210,623 | https://en.wikipedia.org/wiki/Incentive-centered%20design | Incentive-centered design (ICD) is the science of designing a system or institution according to the alignment of individual and user incentives with the goals of the system. Using incentive-centered design, system designers can observe systematic and predictable tendencies in users in response to motivators to provide or manage incentives to induce a greater amount and more valuable participation. ICD is often considered when designing a system to induce desirable behaviors from users, such as participation and cooperation. It draws from principles in various areas such as economics, psychology, sociology, design, and engineering. ICD has been gaining attention in research communities due to the role it can play in helping systems benefit their users and ultimately achieve better results.
History
In 1996, the Nobel Prize in Economics was awarded to William Vickrey and James Mirrlees for their work in "The economic theory of incentives under asymmetric information", which was a core issue addressed by the theory of mechanism design. The theory of mechanism design was an antecedent to incentive-centered design, and on October 15, 2007, Roger Myerson, Leonid Hurwicz and Eric Maskin received the Nobel Prize in Economics from the Royal Swedish Academy of Sciences for their contributions to that theory. Leonid Hurwicz was the founder of the theory of mechanism design, which is a branch in economics that deals with game theory. In mechanism design, designers try to satisfy design goals in specific sets of games by setting outcome functions and message space of the game. The idea of designing "mechanisms", or sets of institutional participation rules, in order to achieve the designer's goals for a system, is a core concept for ICD.
In 2001, the STIET program (Socio-Technical Infrastructure for Electronic Transactions) received a grant to fund doctoral fellowships and a multidisciplinary program at the University of Michigan. The program aimed at training, research, and outreach related to modern information systems through an incentive-centered design approach. The participants in the program consisted of doctoral students and faculty members of the universities. In 2004, Paul Resnick, one of the four faculty members in the STIET research group at Michigan, coined the phrase "Incentive-centered Design" to describe the type of work they did. In 2007, Wayne State University joined the University of Michigan to focus the program on Incentive-centered Design, and the STIET program received a five-year renewal grant that allowed for research in incentive-centered design.
From 2010 to 2015, approximately fifty British academics engaged in the ORCHID project, with one of their chief aims being to elaborate the principles of ICD (referred to as "incentive engineering").
Interdisciplinary concepts
Incentive-centered design branches from various areas and can be applied to a multitude of systems and concepts. It is closely related to user-centered design in that it takes users' wants, needs, and limitations into account during the design process for a product. Additionally, ICD is connected to human–computer interaction since it involves the conjunction of humans and machines and how the two can mesh together well. In particular, ICD blends together the goals of the user and the goals of the system so that the user can have a pleasant and valuable experience while using the system, and the system can give the user what they need and ultimately become more aware and responsive to varying needs. ICD also borrows from the Theory of Incentives. Conflicting objectives and decentralized information are two of the main components of the Theory of Incentives. ICD works to understand the objectives of the user and the system and to combine and process information so that both parties obtain optimal results.
Information security
Information security is the concept of protecting information and information systems from unauthorized access and use. Incentive-centered design can assist in bringing into consideration the errors that humans can make when using a system. These errors could potentially lead to weaknesses in the system that can be taken advantage of by attackers. With ICD, a system can guide a user into providing appropriate and adequate information to prevent system weaknesses. A simple example would be the generation of passwords. By providing users tips, motivation and feedback on the passwords that they choose, systems can ensure that user accounts have a significantly decreased chance of getting attacked.
User-generated content
User-generated content, in simple terms, refers to media content that is created by users and made publicly available on the internet. Incentives for users to contribute to user-generated content would be receiving recognition for their work, connecting with others, and self-expression. Examples would include users uploading their own videos on the YouTube platform, posting reviews on a website, etc. User-generated content has three requirements. One is the publication requirement, and the second is the creative effort requirement - users must add their own original creative effort and value into their work. The final requirement is that the creation is outside of professional routines and practices - most user-generated content is non-professional and has no relation with anything institutional or commercial.
Reputation systems
Everything has a reputation - goods, services, companies, service providers, etc. - and based on the collection of opinions of other entities about those things, a reputation system uses an algorithm to generate reputation scores for them. Reputation systems are similar to recommendation systems - purchasing decisions of goods and services are influenced by the reputation scores of those goods and services, and goods with high reputation scores will attract more buyers. Examples would include Amazon and eBay, where customers who purchase the item are able to rate and review the quality of the product. The cumulative ratings would be displayed for the product, indicating its quality and popularity. A relation to incentive-centered design would be that if sellers on eBay have a high reputation, then other users would be inclined to buy from them, or if the item itself has high ratings, users will be more likely to go for that item.
Social computing
Social computing concerns the intertwining of computational systems and social behavior. Social computing entails a high level of community formation, user content creation, and collective action. Peer-to-peer networks, open source communities, and wikis are all examples of forms of social computing. In such areas, incentives are provided in the form of respect and recognition for users who provide high quality content and contributions. As a result of these contributions, the system overall becomes of higher quality.
Recommender systems
Recommender systems attempt to predict the 'rating' or 'preference' a user would have of a particular item based on either attributes of the item or attributes of the social network related to the user. These recommendation systems can be seen in places such as social networks suggesting new friends, shopping sites recommending related clothing, etc. The Netflix Recommender System is designed to incentivize participation from the user, aligning the system's interests with the users' interests. The users want to find content they are interested in, and Netflix wants to be able to provide better recommendations to their users. The star system that Netflix provides allows both parties to benefit.
Online auction design
An online auction is essentially an auction on the internet. Different formats range from descending auctions to sealed-bid auctions. A huge variety of goods and services can be sold in online auctions, and there are hundreds of different websites that are all for online auctions. A well-known example would be eBay, where users on the site can sell their own personal items for others to buy. In relation to incentive-centered design, sites such as eBay allow users to rate the product they purchased. Sellers and goods that have large numbers of high ratings will attract more buyers compared with unreliable sellers and poor-quality goods for sale in the auction.
Current research
Current research is being conducted by the University of Michigan and Wayne State University through their STIET program. The program has made significant contributions to the field of incentive-centered design, and a lot of the research involves game theory models, strategic interaction, and rational decision making.
For example, on July 10, 2008, Rahul Sami and Stanko Dimitrov researched bluffing in prediction markets (in a prediction market, participants bet on the outcomes of future events).
Another would be in July 2009, Michael Wellman and Patrick Jordan both designed the Ad Auction game, and they both developed the strategies and trading interfaces for the game as well.
In 2010, Robert Reynolds and Leonard Kinniard-Heether worked on training a neural network controller to play the video game Super Mario through the use of the Cultural Algorithm Toolkit system (CAT 3.0).
Practical applications
This section is about the practical/current applications of incentive-centered design. It includes examples and applications of the technique in existing products/systems.
Nike+iPod Sports Kit
This sports kit from Nike (Nike+iPod) comes with a receiver that attaches to an iPod or iPhone and a transmitter that is placed in the sole of the shoe. The kit is a running measurement kit – it measures the time of the user's workout, the distance run, the amount of calories burned, speed, etc. The example of incentive-centered design in this system is that when users reach milestones in goal-oriented workouts or achieve personal records, there will be pre-recorded audio feedback from famous sports athletes acknowledging the achievement and also congratulating the user.
Ford Hybrid Car
In the 2010 Ford Fusion and Mercury Milan hybrid sedans, the instrument panels are designed so that the screen shows a cloudy sky and grass. When drivers are driving in a fuel-efficient manner, the green leaves that appear on the panels multiply accordingly. This goal-oriented display could motivate the driver to drive in a more fuel-efficient manner.
Class grading system at Indiana University
Inspired by games such as World of Warcraft, professors at Indiana University changed their course grading system so that it appears to be like a quest in a video game. Students will start off with 0 experience points, and class requirements such as homework, class attendance, exams, and projects are turned into "quests", "fighting monsters", "crafting", and "joining a guild". Lee Sheldon, a university course coordinator, found that student interest and performance increased after such change in the college coursework grading system.
Achievements
In Xbox 360 games, players can unlock Xbox achievements throughout the game. Each achievement is different and challenges the user to complete a certain task. While the Xbox 360 was the first to use achievements, other platforms offer a comparable incentive system as well, including trophies on PlayStation 3 and achievements on Steam.
See also
Gamification
References
Behavioral economics
Mechanism design | Incentive-centered design | [
"Mathematics",
"Biology"
] | 2,143 | [
"Behavioral economics",
"Behavior",
"Game theory",
"Behaviorism",
"Mechanism design"
] |
37,211,506 | https://en.wikipedia.org/wiki/Elliott%20County%20Kimberlite | The Elliott County Kimberlite (sometimes called the Ison Creek Kimberlite) was discovered in Elliott County, Kentucky, by Albert R. Crandall in 1884 over two years before Carvill Lewis named a similar porphyritic peridotite occurring near Kimberley, South Africa, a kimberlite. It occurs as three separate elongate intrusive bodies 1/4 to 1/2 mi in length and a few hundred feet in width, within an area of about a square mile. The rock is a dark-green peridotite (kimberlite) composed of serpentinized olivine and a number of accessory minerals, including phlogopite, pyrope, calcite, enstatite, magnesian ilmenite, and others. Xenoliths, mainly of shale, and igneous rock inclusions are abundant in the three intrusive bodies as described by William Brown in 1977. Detailed petrographic descriptions of the peridotite are presented by Diller in 1887 and Bolivar in 1982. The peridotite has been dated to Early Permian time by K-Ar and Rb-Sr dating of xenocrystic biotite from one of the intrusive masses; however, more recent evidence points to a Cretaceous emplacement. The rock is relatively nonresistant, is commonly disintegrated to a depth of as much as 50 feet, and usually has no topographic expression. Unweathered rock is hard, dark greenish black and weathers to grayish olive. The saprolite is yellowish to reddish brown and strewn with garnet and ilmenite fragments and xenoliths.
Several attempts have been made to find diamonds in the kimberlite with no success.
See also
Lamproite
Lake Ellen Kimberlite
References
Breccias
Diatremes of the United States
Geology of Kentucky | Elliott County Kimberlite | [
"Materials_science"
] | 381 | [
"Breccias",
"Fracture mechanics"
] |
25,935,424 | https://en.wikipedia.org/wiki/Plane%20of%20rotation | In geometry, a plane of rotation is an abstract object used to describe or visualize rotations in space.
The main use for planes of rotation is in describing more complex rotations in four-dimensional space and higher dimensions, where they can be used to break down the rotations into simpler parts. This can be done using geometric algebra, with the planes of rotations associated with simple bivectors in the algebra.
Planes of rotation are not used much in two and three dimensions, as in two dimensions there is only one plane (so, identifying the plane of rotation is trivial and rarely done), while in three dimensions the axis of rotation serves the same purpose and is the more established approach.
Mathematically such planes can be described in a number of ways. They can be described in terms of planes and angles of rotation. They can be associated with bivectors from geometric algebra. They are related to the eigenvalues and eigenvectors of a rotation matrix. And in particular dimensions they are related to other algebraic and geometric properties, which can then be generalised to other dimensions.
Definitions
Plane
For this article, all planes are planes through the origin, that is they contain the zero vector. Such a plane in n-dimensional space is a two-dimensional linear subspace of the space. It is completely specified by any two non-zero and non-parallel vectors that lie in the plane, that is by any two vectors a and b, such that

a \wedge b \neq 0,

where ∧ is the exterior product from exterior algebra or geometric algebra (in three dimensions the cross product can be used). More precisely, the quantity a ∧ b is the bivector associated with the plane specified by a and b, and has magnitude |a| |b| sin φ, where φ is the angle between the vectors; hence the requirement that the vectors be nonzero and nonparallel.
If the bivector is written B = a ∧ b, then the condition that a point x lies on the plane associated with B is simply

x \wedge B = 0.

This is true in all dimensions, and can be taken as the definition of the plane. In particular, from the properties of the exterior product it is satisfied by both a and b, and so by any vector of the form

x = \lambda a + \mu b,

with λ and μ real numbers. As λ and μ range over all real numbers, x ranges over the whole plane, so this can be taken as another definition of the plane.
Plane of rotation
A plane of rotation for a particular rotation is a plane that is mapped to itself by the rotation. The plane is not fixed, but all vectors in the plane are mapped to other vectors in the same plane by the rotation. This transformation of the plane to itself is always a rotation about the origin, through an angle which is the angle of rotation for the plane.
Every rotation except for the identity rotation (with matrix the identity matrix) has at least one plane of rotation, and up to ⌊n/2⌋ planes of rotation, where n is the dimension. The maximum number of planes up to eight dimensions is shown in this table:
Dimension:        2  3  4  5  6  7  8
Number of planes: 1  1  2  2  3  3  4
When a rotation has multiple planes of rotation they are always orthogonal to each other, with only the origin in common. This is a stronger condition than to say the planes are at right angles; it instead means that the planes have no nonzero vectors in common, and that every vector in one plane is orthogonal to every vector in the other plane. This can only happen in four or more dimensions. In two dimensions there is only one plane, while in three dimensions all planes have at least one nonzero vector in common, along their line of intersection.
In more than three dimensions planes of rotation are not always unique. For example the negative of the identity matrix in four dimensions (the central inversion),

-I = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix},

describes a rotation in four dimensions in which every plane through the origin is a plane of rotation through an angle π, so any pair of orthogonal planes generates the rotation. But for a general rotation it is at least theoretically possible to identify a unique set of orthogonal planes, in each of which points are rotated through an angle, so the set of planes and angles fully characterise the rotation.
Two dimensions
In two-dimensional space there is only one plane of rotation, the plane of the space itself. In a Cartesian coordinate system it is the Cartesian plane, in complex numbers it is the complex plane. Any rotation therefore is of the whole plane, i.e. of the space, keeping only the origin fixed. It is specified completely by the signed angle of rotation, in the range for example −π to π. So if the angle is θ the rotation in the complex plane is given by Euler's formula:

z \mapsto e^{i\theta} z = (\cos\theta + i\sin\theta)\,z,

while the rotation in a Cartesian plane is given by the rotation matrix:

R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
Three dimensions
In three-dimensional space there are an infinite number of planes of rotation, only one of which is involved in any given rotation. That is, for a general rotation there is precisely one plane which is associated with it or which the rotation takes place in. The only exception is the trivial rotation, corresponding to the identity matrix, in which no rotation takes place.
In any rotation in three dimensions there is always a fixed axis, the axis of rotation. The rotation can be described by giving this axis, with the angle through which the rotation turns about it; this is the axis angle representation of a rotation. The plane of rotation is the plane orthogonal to this axis, so the axis is a surface normal of the plane. The rotation then rotates this plane through the same angle as it rotates around the axis, that is everything in the plane rotates by the same angle about the origin.
One example is shown in the diagram, where the rotation takes place about the z-axis. The plane of rotation is the xy-plane, so everything in that plane is kept in the plane by the rotation. This could be described by a matrix like the following, with the rotation being through an angle θ (about the axis or in the plane):

R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.
Another example is the Earth's rotation. The axis of rotation is the line joining the North Pole and South Pole and the plane of rotation is the plane through the equator between the Northern and Southern Hemispheres. Other examples include mechanical devices like a gyroscope or flywheel which store rotational energy in mass usually along the plane of rotation.
In any three dimensional rotation the plane of rotation is uniquely defined. Together with the angle of rotation it fully describes the rotation. Or in a continuously rotating object the rotational properties such as the rate of rotation can be described in terms of the plane of rotation. It is perpendicular to, and so is defined by and defines, an axis of rotation, so any description of a rotation in terms of a plane of rotation can be described in terms of an axis of rotation, and vice versa. But unlike the axis of rotation the plane generalises into other, in particular higher, dimensions.
Four dimensions
A general rotation in four-dimensional space has only one fixed point, the origin. Therefore an axis of rotation cannot be used in four dimensions. But planes of rotation can be used, and each non-trivial rotation in four dimensions has one or two planes of rotation.
Simple rotations
A rotation with only one plane of rotation is a simple rotation. In a simple rotation there is a fixed plane, and rotation can be said to take place about this plane, so points as they rotate do not change their distance from this plane. The plane of rotation is orthogonal to this plane, and the rotation can be said to take place in this plane.
For example the following matrix fixes the zw-plane: points in that plane and only in that plane are unchanged.

R = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

The plane of rotation is the xy-plane; points in this plane are rotated through an angle θ. A general point rotates only in the xy-plane, that is it rotates around the zw-plane by changing only its x and y coordinates.
In two and three dimensions all rotations are simple, in that they have only one plane of rotation. Only in four and more dimensions are there rotations that are not simple rotations. In particular in four dimensions there are also double and isoclinic rotations.
Double rotations
In a double rotation there are two planes of rotation, no fixed planes, and the only fixed point is the origin. The rotation can be said to take place in both planes of rotation, as points in them are rotated within the planes. These planes are orthogonal, that is they have no vectors in common so every vector in one plane is at right angles to every vector in the other plane. The two rotation planes span four-dimensional space, so every point in the space can be specified by two points, one on each of the planes.
A double rotation has two angles of rotation, one for each plane of rotation. The rotation is specified by giving the two planes and two non-zero angles, α and β (if either angle is zero the rotation is simple). Points in the first plane rotate through α, while points in the second plane rotate through β. All other points rotate through an angle between α and β, so in a sense they together determine the amount of rotation. For a general double rotation the planes of rotation and angles are unique, and given a general rotation they can be calculated. For example a rotation of α in the xy-plane and β in the zw-plane is given by the matrix

R = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 & 0 \\ \sin\alpha & \cos\alpha & 0 & 0 \\ 0 & 0 & \cos\beta & -\sin\beta \\ 0 & 0 & \sin\beta & \cos\beta \end{pmatrix}.
Isoclinic rotations
A special case of the double rotation is when the angles are equal, that is if α = β ≠ 0. This is called an isoclinic rotation, and it differs from a general double rotation in a number of ways. For example in an isoclinic rotation, all non-zero points rotate through the same angle, α. Most importantly the planes of rotation are not uniquely identified. There are instead an infinite number of pairs of orthogonal planes that can be treated as planes of rotation. For example any point can be taken, and the plane it rotates in together with the plane orthogonal to it can be used as two planes of rotation.
Higher dimensions
As already noted the maximum number of planes of rotation in n dimensions is ⌊n/2⌋,
so the complexity quickly increases with more than four dimensions and categorising rotations as above becomes too complex to be practical, but some observations can be made.
Simple rotations can be identified in all dimensions, as rotations with just one plane of rotation. A simple rotation in n dimensions takes place about (that is at a fixed distance from) an (n − 2)-dimensional subspace orthogonal to the plane of rotation.
A general rotation is not simple, and has the maximum number of planes of rotation as given above. In the general case the angles of rotations in these planes are distinct and the planes are uniquely defined. If any of the angles are the same then the planes are not unique, as in four dimensions with an isoclinic rotation.
In even dimensions (n = 2, 4, 6, ...) there are up to n/2 planes of rotation, which span the space, so a general rotation rotates all points except the origin, which is the only fixed point. In odd dimensions (n = 3, 5, 7, ...) there are (n − 1)/2 planes and angles of rotation, the same as in the even dimension one lower. These do not span the space, but leave a line which does not rotate – like the axis of rotation in three dimensions, except rotations do not take place about this line but in multiple planes orthogonal to it.
Mathematical properties
The examples given above were chosen to be clear and simple examples of rotations, with planes generally parallel to the coordinate axes in three and four dimensions. But this is not generally the case: planes are not usually parallel to the axes, and the matrices cannot simply be written down. In all dimensions the rotations are fully described by the planes of rotation and their associated angles, so it is useful to be able to determine them, or at least find ways to describe them mathematically.
Reflections
Every simple rotation can be generated by two reflections. Reflections can be specified in n dimensions by giving an (n − 1)-dimensional subspace to reflect in, so a two-dimensional reflection is in a line, a three-dimensional reflection is in a plane, and so on. But this becomes increasingly difficult to apply in higher dimensions, so it is better to use vectors instead, as follows.
A reflection in n dimensions is specified by a vector perpendicular to the (n − 1)-dimensional subspace. To generate simple rotations only reflections that fix the origin are needed, so the vector does not have a position, just direction. It does also not matter which way it is facing: it can be replaced with its negative without changing the result. Similarly unit vectors can be used to simplify the calculations.
So the reflection in an (n − 1)-dimensional space is given by the unit vector m perpendicular to it, thus:

x \mapsto x' = -mxm,

where the product is the geometric product from geometric algebra.
If x' is reflected in another, distinct, (n − 1)-dimensional space, described by a unit vector n perpendicular to it, the result is

x \mapsto x'' = -nx'n = nmxmn.

This is a simple rotation in n dimensions, through twice the angle between the subspaces, which is also the angle between the vectors m and n. It can be checked using geometric algebra that this is a rotation, and that it rotates all vectors as expected.
The quantity nm is a rotor, and mn is its inverse as

(mn)(nm) = m(nn)m = mm = 1.

So the rotation can be written

x'' = R x R^{-1} = nmxmn,

where R = nm is the rotor.
The plane of rotation is the plane containing m and n, which must be distinct otherwise the reflections are the same and no rotation takes place. As either vector can be replaced by its negative the angle between them can always be acute, or at most π/2. The rotation is through twice the angle between the vectors, up to π or a half-turn. The sense of the rotation is to rotate from m towards n: the geometric product is not commutative, so the rotor mn gives the inverse rotation, with sense from n to m.
Conversely all simple rotations can be generated this way, with two reflections, by two unit vectors in the plane of rotation separated by half the desired angle of rotation. These can be composed to produce more general rotations, using up to n reflections if the dimension n is even, n − 1 if n is odd, by choosing pairs of reflections given by two vectors in each plane of rotation.
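A small numerical illustration of composing two reflections into a simple rotation (a sketch using Householder reflection matrices in place of the geometric-algebra sandwich product; the vectors and the angle are arbitrary choices):

import numpy as np

def householder(u):
    # Reflection in the hyperplane perpendicular to the unit vector u
    u = u / np.linalg.norm(u)
    return np.eye(len(u)) - 2.0 * np.outer(u, u)

# Two unit vectors in the xy-plane of R^3, separated by 30 degrees
m = np.array([1.0, 0.0, 0.0])
n = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6), 0.0])

R = householder(n) @ householder(m)   # reflect in m first, then in n

# The composition is a rotation by twice the angle between m and n (60 degrees)
# in the plane spanned by m and n; vectors orthogonal to that plane are fixed.
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 3))   # [0.5, 0.866, 0.0]
print(np.round(R @ np.array([0.0, 0.0, 1.0]), 3))   # [0.0, 0.0, 1.0]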
Bivectors
Bivectors are quantities from geometric algebra, clifford algebra and the exterior algebra, which generalise the idea of vectors into two dimensions. As vectors are to lines, so are bivectors to planes. So every plane (in any dimension) can be associated with a bivector, and every simple bivector is associated with a plane. This makes them a good fit for describing planes of rotation.
Every rotation plane in a rotation has a simple bivector associated with it. This is parallel to the plane and has magnitude equal to the angle of rotation in the plane. These bivectors are summed to produce a single, generally non-simple, bivector for the whole rotation. This can generate a rotor through the exponential map, which can be used to rotate an object.
Bivectors are related to rotors through the exponential map (which applied to bivectors generates rotors and rotations using De Moivre's formula). In particular given any bivector B the rotor associated with it is

R = e^{B/2}.

This is a simple rotation if the bivector is simple, a more general rotation otherwise. When squared,

R^2 = e^{B},

it gives a rotor that rotates through twice the angle. If B is simple then this is the same rotation as is generated by two reflections, as the rotor nm gives a rotation through twice the angle between the vectors. These can be equated,

e^{B} = nm,

from which it follows that the bivector associated with the plane of rotation containing m and n that rotates m to n is

B = \log(nm).
This is a simple bivector, associated with the simple rotation described. More general rotations in four or more dimensions are associated with sums of simple bivectors, one for each plane of rotation, calculated as above.
Examples include the two rotations in four dimensions given above. The simple rotation in the xy-plane by an angle θ has bivector e12θ (where e12 = e1 ∧ e2), a simple bivector. The double rotation by α and β in the xy-plane and zw-plane has bivector e12α + e34β, the sum of two simple bivectors e12α and e34β (with e34 = e3 ∧ e4) which are parallel to the two planes of rotation and have magnitudes equal to the angles of rotation.
Given a rotor the bivector associated with it can be recovered by taking the logarithm of the rotor, which can then be split into simple bivectors to determine the planes of rotation, although in practice for all but the simplest of cases this may be impractical. But given the simple bivectors geometric algebra is a useful tool for studying planes of rotation using algebra like the above.
Eigenvalues and eigenplanes
The planes of rotation for a particular rotation can be found using the eigenvalues of its matrix. Given a general rotation matrix in n dimensions its characteristic equation has either one (in odd dimensions) or zero (in even dimensions) real roots. The other roots are in complex conjugate pairs, exactly ⌊n/2⌋ such pairs. These correspond to the planes of rotation, the eigenplanes of the matrix, which can be calculated using algebraic techniques. In addition the arguments of the complex roots are the magnitudes of the bivectors associated with the planes of rotation. The form of the characteristic equation is related to the planes, making it possible to relate its algebraic properties like repeated roots to the bivectors, where repeated bivector magnitudes have particular geometric interpretations.
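A small numerical illustration of this (a 4-dimensional double rotation assembled from two plane rotations; numpy's eigendecomposition recovers the two angles as the arguments of the complex-conjugate eigenvalue pairs — the angles 0.3 and 1.2 are arbitrary):

import numpy as np

def double_rotation_4d(alpha, beta):
    # Rotation by alpha in the xy-plane and beta in the zw-plane
    r = np.eye(4)
    r[0:2, 0:2] = [[np.cos(alpha), -np.sin(alpha)], [np.sin(alpha), np.cos(alpha)]]
    r[2:4, 2:4] = [[np.cos(beta), -np.sin(beta)], [np.sin(beta), np.cos(beta)]]
    return r

R = double_rotation_4d(0.3, 1.2)
eigvals, eigvecs = np.linalg.eig(R)

# Each plane contributes a conjugate pair e^{+-i*angle}; the real and imaginary
# parts of the corresponding eigenvectors span the eigenplanes.
angles = sorted(set(round(abs(np.angle(v)), 6) for v in eigvals))
print(angles)   # [0.3, 1.2]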
See also
Charts on SO(3)
Givens rotation
Quaternions
Rotation group SO(3)
Rotations in 4-dimensional Euclidean space
Notes
References
Geometric algebra
Rotation in three dimensions
Rotational symmetry
Orientation (geometry)
Planes (geometry) | Plane of rotation | [
"Physics",
"Mathematics"
] | 3,581 | [
"Planes (geometry)",
"Mathematical objects",
"Infinity",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)",
"Symmetry",
"Rotational symmetry"
] |
25,941,505 | https://en.wikipedia.org/wiki/Quantum%20dynamics | In physics, quantum dynamics is the quantum version of classical dynamics. Quantum dynamics deals with the motions, and energy and momentum exchanges of systems whose behavior is governed by the laws of quantum mechanics. Quantum dynamics is relevant for burgeoning fields, such as quantum computing and atomic optics.
In mathematics, quantum dynamics is the study of the mathematics behind quantum mechanics. Specifically, as a study of dynamics, this field investigates how quantum mechanical observables change over time. Most fundamentally, this involves the study of one-parameter automorphisms of the algebra of all bounded operators on the Hilbert space of observables (which are self-adjoint operators). These dynamics were understood as early as the 1930s, after Wigner, Stone, Hahn and Hellinger worked in the field. Recently, mathematicians in the field have studied irreversible quantum mechanical systems on von Neumann algebras.
Relation to classical dynamics
Equations to describe quantum systems can be seen as equivalent to those of classical dynamics on a macroscopic scale, except for the important detail that the variables don't follow the commutative laws of multiplication. Hence, as a fundamental principle, these variables are instead described as "q-numbers", conventionally represented by operators or Hermitian matrices on a Hilbert space. Indeed, the state of the system in the atomic and subatomic scale is described not by dynamic variables with specific numerical values, but by state functions that are dependent on the c-number time. In this realm of quantum systems, the equation of motion governing dynamics heavily relies on the Hamiltonian, also known as the total energy. Therefore, to anticipate the time evolution of the system, one only needs to determine the initial condition of the state function |Ψ(t)⟩ and its first derivative with respect to time.
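A concrete statement of the evolution law alluded to here is the standard time-dependent Schrödinger equation, together with its formal solution for a time-independent Hamiltonian (ħ is the reduced Planck constant and Ĥ the Hamiltonian operator):

i\hbar \frac{\partial}{\partial t}\,|\Psi(t)\rangle = \hat{H}\,|\Psi(t)\rangle,
\qquad
|\Psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,|\Psi(0)\rangle .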
For example, quasi-free states and automorphisms are the Fermionic counterparts of classical Gaussian measures (Fermions' descriptors are Grassmann operators).
See also
Quantum Field Theory
Perturbation theory
Semigroups
Pseudodifferential operators
Brownian motion
Dilation theory
Quantum probability
Free probability
References
Quantum mechanics | Quantum dynamics | [
"Physics"
] | 433 | [
"Theoretical physics",
"Quantum mechanics"
] |
25,945,211 | https://en.wikipedia.org/wiki/List%20of%20Feynman%20diagrams | This is a list of common Feynman diagrams. Feynman's first published diagram appeared in Physical Review in 1949.
References
Particle physics
Physics-related lists | List of Feynman diagrams | [
"Physics"
] | 31 | [
"Particle physics"
] |
25,946,143 | https://en.wikipedia.org/wiki/Built-up%20gun | A built-up gun is artillery with a specially reinforced barrel. An inner tube of metal stretches within its elastic limit under the pressure of confined powder gases to transmit stress to outer cylinders that are under tension. Concentric metal cylinders or wire windings are assembled to minimize the weight required to resist the pressure of powder gases pushing a projectile out of the barrel. Built-up construction was the norm for guns mounted aboard 20th century dreadnoughts and contemporary railway guns, coastal artillery, and siege guns through World War II.
Background
The first built-up gun was designed by French artillery officer Alfred Thiéry in 1834 and tested not later than 1840. Also about 1840 another one was made by Daniel Treadwell, and yet another one was produced by Mersey Iron Works in Liverpool according to John Ericsson's design. Sheffield architect John Frith received a patent on their manufacture in 1843. However, all these guns (whether made from cast iron, wrought iron or their combination) were not technologically practical before the 1850s.
In the 1850s William Armstrong serially produced his rifled breechloaders with the same technology, and built-up, but very simple Parrott rifled muzzleloaders played a significant role in the US Civil War a decade later. Blakely rifles also participated in that war, but on another side. Starting from the 1860s, built-up Krupp guns became a commercial success in Continental Europe.
Velocity and range of artillery vary directly with pressure of gunpowder or smokeless powder gases pushing the shell out of a gun barrel. A gun will deform (or explode) if chamber pressures strain the barrel beyond the elastic limit of the metal from which it is made. Thickness of homogeneous cast iron gun barrels reached a useful limit at approximately one-half caliber. Additional thickness provided little practical benefit, since higher pressures generated cracks from the bore before the outer portion of the cylinder could respond, and those cracks would extend outward during subsequent firings.
By the 1870s the technology was widely adopted. Claverino's 1876 treatise on the "Resistance of Hollow Cylinders" was published in Giornale d'Artiglieria. The concept was to give exterior portions of the gun initial tension, gradually decreasing toward the interior, while giving interior parts a normal state of compression by the outer cylinders and wire windings. Theoretical maximum performance would be achieved if the inner cylinder forming the rifled bore were compressed to its elastic limit by surrounding elements while at rest before firing, and expanded to its elastic limit by internal gas pressure during firing.
Nomenclature
The innermost cylinder forming the chamber and rifled bore is called a tube or, with certain construction techniques, a liner. A second layer cylinder called the jacket extends rearward past the chamber to house the breechblock. The jacket usually extends forward through the areas of highest pressure, through the recoil slide, and may extend all the way to the muzzle. The forward part of the barrel may be tapered toward the muzzle because less strength is required for reduced pressures as the projectile approaches it. This tapered portion of barrel is called the chase. Very large guns sometimes use shorter outer cylinders called hoops when manufacturing limitations make full length jackets impractical. Hoops forward of the slide are called chase hoops. The jacket or forward chase hoop may be flared outward in the form of a bell at the muzzle for extra strength to reduce splitting because the metal at that point is not supported on the forward end. As many as four or five layers, or hoop courses, of successively tensioned cylinders have been used. Layers are designated alphabetically as the "A" tube enclosed by the "B" jacket and chase hoops, enclosed by the "C" hoop course, enclosed by the "D" hoop course, etc. Individual hoops within a course are numbered from the breech forward as the B1 jacket, the B2 chase hoop, and then the C1 jacket hoop, the C2 hoop etc. Successive hoop course joints are typically staggered and individual hoop courses use lap joints in preference to butt joints to minimize the weakness of joint locations. Cylinder diameter may be varied by including machined shoulders to prevent forward longitudinal movement of an inner cylinder within an outer cylinder during firing. Shoulder locations are similarly staggered to minimize weakness.
Assembly procedure
After the tube, jacket, and hoops have been machined to appropriate dimensions, the jacket is carefully heated to approximately 400 degrees Celsius (800 degrees Fahrenheit) in a vertical air furnace so thermal expansion allows the cool tube to be lowered into place. When the jacket is in position, it is cooled to form a tensioned shrink fit over the tube. Then the next hoop (either B2 or C1) is similarly heated so the assembled A tube and B1 jacket can be lowered into position for a successive shrink fit. The assembled unit may be machined prior to fitting a new hoop. The process continues as remaining tubes are heated sequentially and cooled onto the built-up unit until all elements have been assembled. When tensioned wire winding is used in place of a hoop course, the wire is typically covered by an outer tensioned cylinder also called a jacket.
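A rough back-of-the-envelope check of why the quoted heating provides assembly clearance (assuming a linear thermal-expansion coefficient for gun steel of about 1.2 × 10⁻⁵ per °C; the jacket bore diameter is an arbitrary illustrative figure):

# Diametral growth of a heated jacket, available as clearance for slipping it over the tube
alpha = 1.2e-5             # assumed linear expansion coefficient of gun steel, 1/degC
delta_t = 400 - 20         # heating from room temperature to roughly 400 degC
bore_diameter_mm = 300.0   # illustrative jacket bore diameter

growth_mm = alpha * delta_t * bore_diameter_mm
print(f"Bore grows by about {growth_mm:.2f} mm")   # ~1.4 mm of assembly clearance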
Liners
Burning powder gases melt part of the bore each time a gun is fired. This melted metal is oxidized or blown out of the muzzle until the barrel is eroded to the extent shell dispersion becomes unacceptable. After firing several hundred shells, a gun may be reconditioned by boring out the interior and inserting a new liner as the interior cylinder. Exterior cylinders are heated as a unit to approximately 200 degrees Celsius (400 degrees Fahrenheit) to allow insertion of a new liner and the liner is bored and rifled after installation. A new liner may be bored for a different projectile diameter than used in the original gun. Liners may be either cylindrical or conical. Conical liners are tapered toward the muzzle for ease of removal from the breech end while limiting forward creep during firing. Conical liners may be removed by water cooling the liner after re-heating the barrel, but cylindrical liners must be bored out.
Monoblock guns
With the obsolescence of very large guns following World War II, metallurgical advances encouraged use of monoblock (one-piece) construction for postwar guns of medium caliber. In a procedure called autofrettage, a bored monoblock tube is filled with hydraulic fluid at pressures higher than the finished gun will experience during firing. Upon release of hydraulic pressure, the internal diameter of the monoblock tube will have been increased by approximately 6%. The outer portion of the finished monoblock rebounds to approximately its original diameter and exerts compressive forces on the inner portion similar to the separate cylinders of a built-up gun.
See also
Hoop gun
Notes
References
Fairfield, A.P., CDR, USN Naval Ordnance (1921) Lord Baltimore Press
Naval artillery
Firearm construction
Mechanical engineering | Built-up gun | [
"Physics",
"Engineering"
] | 1,398 | [
"Firearm construction",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,440,685 | https://en.wikipedia.org/wiki/Project%20621 | Project 621 was a project by the Dornier company with the intention of producing a sounding rocket with liquid-fuel propulsion. It was to use a paraglider system to return safely to earth for later re-use. It had a wet mass of 240 kg (520 lb) and a dry mass of 74 kg. The rocket itself was never launched.
The rocket of the project was supposed to reach an apogee of eighty kilometers, and then return from a height of forty kilometers, spiraling to the ground. In 1965, multiple drop tests of the paraglider system were performed at the Salto di Quirra rocket test range in Sardinia.
The project was halted in 1966, after large problems occurred regarding the choice of a suitable wing material (textile wings could not handle the loads, while wings made of high-grade steel could not withstand the airflow without major problems).
References
astronautix.com
Sounding rockets of Germany | Project 621 | [
"Astronomy"
] | 191 | [
"Rocketry stubs",
"Astronomy stubs"
] |
1,440,776 | https://en.wikipedia.org/wiki/Adiabatic%20invariant | A property of a physical system, such as the entropy of a gas, that stays approximately constant when changes occur slowly is called an adiabatic invariant. By this it is meant that if a system is varied between two end points, as the time for the variation between the end points is increased to infinity, the variation of an adiabatic invariant between the two end points goes to zero.
In thermodynamics, an adiabatic process is a change that occurs without heat flow; it may be slow or fast. A reversible adiabatic process is an adiabatic process that occurs slowly compared to the time to reach equilibrium. In a reversible adiabatic process, the system is in equilibrium at all stages and the entropy is constant. In the first half of the 20th century, scientists working in quantum physics used the term "adiabatic" for reversible adiabatic processes, and later for any gradually changing conditions which allow the system to adapt its configuration. The quantum mechanical definition is closer to the thermodynamical concept of a quasistatic process and has no direct relation with adiabatic processes in thermodynamics.
In mechanics, an adiabatic change is a slow deformation of the Hamiltonian, where the fractional rate of change of the energy is much slower than the orbital frequency. The area enclosed by the different motions in phase space are the adiabatic invariants.
In quantum mechanics, an adiabatic change is one that occurs at a rate much slower than the difference in frequency between energy eigenstates. In this case, the energy states of the system do not make transitions, so that the quantum number is an adiabatic invariant.
The old quantum theory was formulated by equating the quantum number of a system with its classical adiabatic invariant. This determined the form of the Bohr–Sommerfeld quantization rule: the quantum number is the area in phase space of the classical orbit.
Thermodynamics
In thermodynamics, adiabatic changes are those that do not increase the entropy. They occur slowly in comparison to the other characteristic timescales of the system of interest and allow heat flow only between objects at the same temperature. For isolated systems, an adiabatic change allows no heat to flow in or out.
Adiabatic expansion of an ideal gas
If a container with an ideal gas is expanded instantaneously, the temperature of the gas doesn't change at all, because none of the molecules slow down. The molecules keep their kinetic energy, but now the gas occupies a bigger volume. If the container expands slowly, however, so that the ideal gas pressure law holds at any time, gas molecules lose energy at the rate that they do work on the expanding wall. The amount of work they do is the pressure times the area of the wall times the outward displacement, which is the pressure times the change in the volume of the gas:
If no heat enters the gas, the energy in the gas molecules is decreasing by the same amount. By definition, a gas is ideal when its temperature is only a function of the internal energy per particle, not the volume. So
where C_v is the specific heat at constant volume. When the change in energy is entirely due to work done on the wall, the change in temperature is given by
This gives a differential relationship between the changes in temperature and volume, which can be integrated to find the invariant. The constant k_B is just a unit conversion factor, which can be set equal to one:
So
is an adiabatic invariant, which is related to the entropy
Thus entropy is an adiabatic invariant. The N log(N) term makes the entropy additive, so the entropy of two volumes of gas is the sum of the entropies of each one.
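In outline, the standard derivation runs as follows; this is a sketch assuming the ideal-gas law PV = NT in units where k_B = 1, with specific heat c_v per particle.

```latex
% Sketch of the slow adiabatic expansion of an ideal gas (k_B = 1, c_v per particle):
\begin{align}
  dW &= P\,dV, \qquad dE = N c_v\,dT = -P\,dV, \\
  PV &= NT \;\Rightarrow\; c_v\,\frac{dT}{T} + \frac{dV}{V} = 0, \\
  T^{c_v} V &= \text{const}, \qquad
  S = N \log\!\left(\frac{T^{c_v} V}{N}\right) + \text{const}.
\end{align}
```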
In a molecular interpretation, S is the logarithm of the phase-space volume of all gas states with energy E(T) and volume V.
For a monatomic ideal gas, this can easily be seen by writing down the energy:
The different internal motions of the gas with total energy E define a sphere, the surface of a 3N-dimensional ball with radius √(2mE). The volume of the sphere is
where Γ is the gamma function.
Since each gas molecule can be anywhere within the volume V, the volume in phase space occupied by the gas states with energy E is
Since the N gas molecules are indistinguishable, the phase-space volume is divided by N!, the number of permutations of N molecules.
Using Stirling's approximation for the gamma function, and ignoring factors that disappear in the logarithm after taking N large,
Since the specific heat of a monatomic gas is 3/2, this is the same as the thermodynamic formula for the entropy.
Wien's law – adiabatic expansion of a box of light
For a box of radiation, ignoring quantum mechanics, the energy of a classical field in thermal equilibrium is infinite, since equipartition demands that each field mode has an equal energy on average, and there are infinitely many modes. This is physically ridiculous, since it means that all energy leaks into high-frequency electromagnetic waves over time.
Still, without quantum mechanics, there are some things that can be said about the equilibrium distribution from thermodynamics alone, because there is still a notion of adiabatic invariance that relates boxes of different size.
When a box is slowly expanded, the frequency of the light recoiling from the wall can be computed from the Doppler shift. If the wall is not moving, the light recoils at the same frequency. If the wall is moving slowly, the recoil frequency is only equal in the frame where the wall is stationary. In the frame where the wall is moving away from the light, the light coming in is bluer than the light coming out by twice the Doppler shift factor v/c:
On the other hand, the energy in the light is also decreased when the wall is moving away, because the light is doing work on the wall by radiation pressure. Because the light is reflected, the pressure is equal to twice the momentum carried by light, which is E/c. The rate at which the pressure does work on the wall is found by multiplying by the velocity:
This means that the change in frequency of the light is equal to the work done on the wall by the radiation pressure. The light that is reflected is changed both in frequency and in energy by the same amount:
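To first order in the wall speed v, the standard relations take the form below; this is a sketch of the usual argument rather than the exact expressions originally written out here.

```latex
% Light reflecting off a wall receding at speed v, to first order in v/c:
\begin{align}
  \frac{\Delta f}{f} = -\frac{2v}{c}, \qquad
  \frac{\Delta E}{E} = -\frac{2v}{c}
  \quad\Longrightarrow\quad
  \frac{E}{f}\ \text{is unchanged by the reflection.}
\end{align}
```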
Since moving the wall slowly should keep a thermal distribution fixed, the probability that the light has energy E at frequency f must only be a function of E/f.
This function cannot be determined from thermodynamic reasoning alone, and Wien guessed at the form that was valid at high frequency. He supposed that the average energy in high-frequency modes was suppressed by a Boltzmann-like factor:
This is not the expected classical energy in the mode, which is k_B T by equipartition, but a new and unjustified assumption that fit the high-frequency data.
When the expectation value is added over all modes in a cavity, this is Wien's distribution, and it describes the thermodynamic distribution of energy in a classical gas of photons. Wien's law implicitly assumes that light is statistically composed of packets that change energy and frequency in the same way. The entropy of a Wien gas scales as the volume to the power N, where N is the number of packets. This led Einstein to suggest that light is composed of localizable particles with energy proportional to the frequency. Then the entropy of the Wien gas can be given a statistical interpretation as the number of possible positions that the photons can be in.
Classical mechanics – action variables
Suppose that a Hamiltonian is slowly time-varying, for example, a one-dimensional harmonic oscillator with a changing frequency:
The action J of a classical orbit is the area enclosed by the orbit in phase space: J = ∮ p dx.
Since J is an integral over a full period, it is only a function of the energy. When the Hamiltonian is constant in time, and J is constant in time, the canonically conjugate variable θ increases in time at a steady rate: dθ/dt = ∂H/∂J.
So the constant ∂H/∂J can be used to change time derivatives along the orbit to partial derivatives with respect to θ at constant J. Differentiating the integral for J with respect to J gives an identity that fixes ∂H/∂J:
The integrand is the Poisson bracket of x and p. The Poisson bracket of two canonically conjugate quantities, like x and p, is equal to 1 in any canonical coordinate system. So
and ∂H/∂J is the inverse period. The variable θ increases by an equal amount in each period for all values of J; it is an angle variable.
Adiabatic invariance of J
The Hamiltonian is a function of J only, and in the simple case of the harmonic oscillator, H = ωJ/(2π).
When H has no time dependence, J is constant. When H is slowly time-varying, the rate of change of J can be computed by re-expressing the integral for J:
The time derivative of this quantity is
Replacing time derivatives with theta derivatives, using dθ = ω dt and setting ω = 1 without loss of generality (ω being a global multiplicative constant in the resulting time derivative of the action) yields
So as long as the coordinates J, θ do not change appreciably over one period, this expression can be integrated by parts to give zero. This means that for slow variations, there is no lowest-order change in the area enclosed by the orbit. This is the adiabatic invariance theorem: the action variables are adiabatic invariants.
For a harmonic oscillator, the area in phase space of an orbit at energy E is the area of the ellipse of constant energy, E = p²/2 + ω²x²/2.
The x radius of this ellipse is √(2E)/ω, while the p radius of the ellipse is √(2E). Multiplying, the area is 2πE/ω. So if a pendulum is slowly drawn in, such that the frequency changes, the energy changes by a proportional amount.
Old quantum theory
After Planck identified that Wien's law can be extended to all frequencies, even very low ones, by interpolating with the classical equipartition law for radiation, physicists wanted to understand the quantum behavior of other systems.
The Planck radiation law quantized the motion of the field oscillators in units of energy proportional to the frequency: E = nhf.
The quantum can only depend on the energy/frequency by adiabatic invariance, and since the energy must be additive when putting boxes end-to-end, the levels must be equally spaced.
Einstein, followed by Debye, extended the domain of quantum mechanics by considering the sound modes in a solid as quantized oscillators. This model explained why the specific heat of solids approached zero at low temperatures, instead of staying fixed at 3k_B per atom as predicted by classical equipartition.
At the Solvay conference, the question of quantizing other motions was raised, and Lorentz pointed out a problem, known as Rayleigh–Lorentz pendulum. If you consider a quantum pendulum whose string is shortened very slowly, the quantum number of the pendulum cannot change because at no point is there a high enough frequency to cause a transition between the states. But the frequency of the pendulum changes when the string is shorter, so the quantum states change energy.
Einstein responded that for slow pulling, the frequency and energy of the pendulum both change, but the ratio stays fixed. This is analogous to Wien's observation that under slow motion of the wall the energy to frequency ratio of reflected waves is constant. The conclusion was that the quantities to quantize must be adiabatic invariants.
This line of argument was extended by Sommerfeld into a general theory: the quantum number of an arbitrary mechanical system is given by the adiabatic action variable. Since the action variable of the harmonic oscillator is an integer multiple of Planck's constant, the general condition is ∮ p dq = nh.
This condition was the foundation of the old quantum theory, which was able to predict the qualitative behavior of atomic systems. The theory is inexact for small quantum numbers, since it mixes classical and quantum concepts. But it was a useful half-way step to the new quantum theory.
Plasma physics
In plasma physics there are three adiabatic invariants of charged-particle motion.
The first adiabatic invariant, μ
The magnetic moment of a gyrating particle is
μ = γ m0 v⊥² / (2B),
which respects special relativity. Here γ is the relativistic Lorentz factor, m0 is the rest mass, v⊥ is the velocity perpendicular to the magnetic field, and B is the magnitude of the magnetic field.
μ is a constant of the motion to all orders in an expansion in ω/ω_c, where ω is the rate of any changes experienced by the particle, e.g., due to collisions or due to temporal or spatial variations in the magnetic field, and ω_c is the gyrofrequency. Consequently, the magnetic moment remains nearly constant even for changes at rates approaching the gyrofrequency. When μ is constant, the perpendicular particle energy is proportional to B, so the particles can be heated by increasing B, but this is a "one-shot" deal because the field cannot be increased indefinitely. It finds applications in magnetic mirrors and magnetic bottles.
There are some important situations in which the magnetic moment is not invariant:
Magnetic pumping: If the collision frequency is larger than the pump frequency, μ is no longer conserved. In particular, collisions allow net heating by transferring some of the perpendicular energy to parallel energy.
Cyclotron heating: If B is oscillated at the cyclotron frequency, the condition for adiabatic invariance is violated, and heating is possible. In particular, the induced electric field rotates in phase with some of the particles and continuously accelerates them.
Magnetic cusps: The magnetic field at the center of a cusp vanishes, so the cyclotron frequency is automatically smaller than the rate of any changes. Thus the magnetic moment is not conserved, and particles are scattered relatively easily into the loss cone.
The second adiabatic invariant, J
The longitudinal invariant of a particle trapped in a magnetic mirror, J = ∫ v∥ ds,
where the integral is between the two turning points, is also an adiabatic invariant. This guarantees, for example, that a particle in the magnetosphere moving around the Earth always returns to the same line of force. The adiabatic condition is violated in transit-time magnetic pumping, where the length of a magnetic mirror is oscillated at the bounce frequency, resulting in net heating.
The third adiabatic invariant, Φ
The total magnetic flux Φ enclosed by a drift surface is the third adiabatic invariant, associated with the periodic motion of mirror-trapped particles drifting around the axis of the system. Because this drift motion is relatively slow, Φ is often not conserved in practical applications.
References
External links
lecture notes on the second adiabatic invariant
lecture notes on the third adiabatic invariant
Quantum mechanics
Thermodynamics
Plasma theory and modeling | Adiabatic invariant | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,053 | [
"Plasma physics",
"Theoretical physics",
"Quantum mechanics",
"Thermodynamics",
"Plasma theory and modeling",
"Dynamical systems"
] |
1,441,435 | https://en.wikipedia.org/wiki/Equilibrium%20point%20%28mathematics%29 | In mathematics, specifically in differential equations, an equilibrium point is a constant solution to a differential equation.
Formal definition
The point x* is an equilibrium point for the differential equation dx/dt = f(t, x)
if f(t, x*) = 0 for all t.
Similarly, the point x* is an equilibrium point (or fixed point) for the difference equation x_{k+1} = f(k, x_k)
if f(k, x*) = x* for all k.
Equilibria can be classified by looking at the signs of the eigenvalues of the linearization of the equations about the equilibria. That is to say, by evaluating the Jacobian matrix at each of the equilibrium points of the system, and then finding the resulting eigenvalues, the equilibria can be categorized. Then the behavior of the system in the neighborhood of each equilibrium point can be qualitatively determined (or even quantitatively determined, in some instances) by finding the eigenvector(s) associated with each eigenvalue.
An equilibrium point is hyperbolic if none of the eigenvalues have zero real part. If all eigenvalues have negative real parts, the point is stable. If at least one has a positive real part, the point is unstable. If at least one eigenvalue has negative real part and at least one has positive real part, the equilibrium is a saddle point and it is unstable. If all the eigenvalues are real and have the same sign the point is called a node.
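This classification is easy to carry out numerically. The following sketch, using an illustrative damped-pendulum Jacobian that is not taken from the article, inspects the real parts of the eigenvalues:

```python
import numpy as np

def classify_equilibrium(jacobian):
    """Classify an equilibrium point from the Jacobian evaluated at that point."""
    real_parts = np.linalg.eigvals(jacobian).real
    if np.any(np.isclose(real_parts, 0.0)):
        return "non-hyperbolic (linearization inconclusive)"
    if np.all(real_parts < 0):
        return "stable (all eigenvalues have negative real part)"
    if np.all(real_parts > 0):
        return "unstable (all eigenvalues have positive real part)"
    return "saddle point (real parts of both signs)"

# Damped pendulum x'' + 0.5 x' + x = 0, linearized about the origin:
J = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
print(classify_equilibrium(J))   # stable
```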
See also
Autonomous equation
Critical point
Steady state
References
Further reading
Stability theory
Dynamical systems | Equilibrium point (mathematics) | [
"Physics",
"Mathematics"
] | 300 | [
"Stability theory",
"Mechanics",
"Dynamical systems"
] |
1,441,464 | https://en.wikipedia.org/wiki/Vacuum%20solution%20%28general%20relativity%29 | In general relativity, a vacuum solution is a Lorentzian manifold whose Einstein tensor vanishes identically. According to the Einstein field equation, this means that the stress–energy tensor also vanishes identically, so that no matter or non-gravitational fields are present. These are distinct from the electrovacuum solutions, which take into account the electromagnetic field in addition to the gravitational field. Vacuum solutions are also distinct from the lambdavacuum solutions, where the only term in the stress–energy tensor is the cosmological constant term (and thus, the lambdavacuums can be taken as cosmological models).
More generally, a vacuum region in a Lorentzian manifold is a region in which the Einstein tensor vanishes.
Vacuum solutions are a special case of the more general exact solutions in general relativity.
Equivalent conditions
It is a mathematical fact that the Einstein tensor vanishes if and only if the Ricci tensor vanishes. This follows from the fact that these two second rank tensors stand in a kind of dual relationship; they are the trace reverse of each other: G^{ab} = R^{ab} − (1/2) R g^{ab} and R^{ab} = G^{ab} − (1/2) G g^{ab},
where the traces are R = R^a_a and G = G^a_a = −R.
A third equivalent condition follows from the Ricci decomposition of the Riemann curvature tensor as a sum of the Weyl curvature tensor plus terms built out of the Ricci tensor: the Weyl and Riemann tensors agree, C_{abcd} = R_{abcd}, in some region if and only if it is a vacuum region.
Gravitational energy
Since T^{ab} = 0 in a vacuum region, it might seem that according to general relativity, vacuum regions must contain no energy. But the gravitational field can do work, so we must expect the gravitational field itself to possess energy, and it does. However, determining the precise location of this gravitational field energy is technically problematical in general relativity, by its very nature of the clean separation into a universal gravitational interaction and "all the rest".
The fact that the gravitational field itself possesses energy yields a way to understand the nonlinearity of the Einstein field equation: this gravitational field energy itself produces more gravity. (This is described as "the gravity of gravity", or by saying that "gravity gravitates".) This means that the gravitational field outside the Sun is a bit stronger according to general relativity than it is according to Newton's theory.
Examples
Well-known examples of explicit vacuum solutions include:
Minkowski spacetime (which describes empty space with no cosmological constant)
Milne model (which is a model developed by E. A. Milne describing an empty universe which has no curvature)
Schwarzschild vacuum (which describes the spacetime geometry around a spherical mass),
Kerr vacuum (which describes the geometry around a rotating object),
Taub–NUT vacuum (a famous counterexample describing the exterior gravitational field of an isolated object with strange properties),
Kerns–Wild vacuum (Robert M. Kerns and Walter J. Wild 1982) (a Schwarzschild object immersed in an ambient "almost uniform" gravitational field),
double Kerr vacuum (two Kerr objects sharing the same axis of rotation, but held apart by unphysical zero active gravitational mass "cables" going out to suspension points infinitely removed),
Khan–Penrose vacuum (K. A. Khan and Roger Penrose 1971) (a simple colliding plane wave model),
Oszváth–Schücking vacuum (the circularly polarized sinusoidal gravitational wave, another famous counterexample).
Kasner metric (An anisotropic solution, used to study gravitational chaos in three or more dimensions).
These all belong to one or more general families of solutions:
the Weyl vacua (Hermann Weyl) (the family of all static vacuum solutions),
the Beck vacua (Guido Beck 1925) (the family of all cylindrically symmetric nonrotating vacuum solutions),
the Ernst vacua (Frederick J. Ernst 1968) (the family of all stationary axisymmetric vacuum solutions),
the Ehlers vacua (Jürgen Ehlers) (the family of all cylindrically symmetric vacuum solutions),
the Szekeres vacua (George Szekeres) (the family of all colliding gravitational plane wave models),
the Gowdy vacua (Robert H. Gowdy) (cosmological models constructed using gravitational waves),
Several of the families mentioned here, members of which are obtained by solving an appropriate linear or nonlinear, real or complex partial differential equation, turn out to be very closely related, in perhaps surprising ways.
In addition to these, we also have the vacuum pp-wave spacetimes, which include the gravitational plane waves.
See also
Introduction to the mathematics of general relativity
Topological defect
References
Sources
Exact solutions in general relativity | Vacuum solution (general relativity) | [
"Mathematics"
] | 958 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations"
] |
1,441,581 | https://en.wikipedia.org/wiki/Keeling%20Curve | The Keeling Curve is a graph of the annual variation and overall accumulation of carbon dioxide in the Earth's atmosphere based on continuous measurements taken at the Mauna Loa Observatory on the island of Hawaii from 1958 to the present day. The curve is named for the scientist Charles David Keeling, who started the monitoring program and supervised it until his death in 2005.
Keeling's measurements showed the first significant evidence of rapidly increasing carbon dioxide (CO2) levels in the atmosphere. According to Naomi Oreskes, Professor of History of Science at Harvard University, the Keeling curve is one of the most important scientific works of the 20th century. Many scientists credit the Keeling curve with first bringing the world's attention to the current increase of CO2 in the atmosphere.
Background
Prior to the 1950s, measurements of atmospheric CO2 concentrations had been taken on an ad hoc basis at a variety of locations. In 1938, engineer and amateur meteorologist Guy Stewart Callendar compared datasets of atmospheric CO2 from Kew in 1898–1901, which averaged 274 parts per million by volume (ppmv), and from the eastern United States in 1936–1938, which averaged 310 ppmv, and concluded that CO2 concentrations were rising due to anthropogenic emissions. However, Callendar's findings were not widely accepted by the scientific community due to the patchy nature of the measurements.
Charles David Keeling, of the Scripps Institution of Oceanography at UC San Diego, was the first person to make frequent regular measurements of atmospheric CO2 concentrations in Antarctica, and on Mauna Loa, Hawaii from March 1958 onwards. Keeling had previously tested and employed measurement techniques at locations including Big Sur near Monterey, rain forests of the Olympic Peninsula in Washington state, and high mountain forests in Arizona. He observed strong diurnal behavior of CO2, with excess CO2 at night due to respiration by plants and soils, and afternoon values representative of the "free atmosphere" over the Northern hemisphere.
Mauna Loa measurements
In 1957–1958, the International Geophysical Year, Keeling obtained funding from the Weather Bureau to install infrared gas analyzers at remote locations, including the South Pole and on the volcano of Mauna Loa on the island of Hawaii. Mauna Loa was chosen as a long-term monitoring site due to its remote location far from continents and its lack of vegetation. Keeling and his collaborators measured the incoming ocean breeze above the thermal inversion layer to minimize local contamination from volcanic vents. The data was normalized to remove any influence from local contamination. Due to funding cuts in the mid-1960s, Keeling was forced to abandon continuous monitoring efforts at the South Pole, but he scraped together enough money to maintain operations at the Mauna Loa Observatory, which have continued to the present day.
Keeling's Tellus article of 1960 presented the first monthly records from Mauna Loa and Antarctica (1957 to 1960), finding a "distinct seasonal cycle...and possibly, a worldwide rise in CO2 from year to year." By the 1970s, it was well established that the increase of atmospheric carbon dioxide was ongoing and due to anthropogenic emissions.
Carbon dioxide measurements at the Mauna Loa Observatory in Hawaii are made with a type of infrared spectrophotometer, now known as a nondispersive infrared sensor, that is calibrated using World Meteorological Organization standards. This type of instrument, originally called a capnograph, was first invented by John Tyndall in 1864, and recorded by pen traces on a strip chart recorder. Currently, several laser-based sensors are being added to run concurrently with the infrared spectrophotometer at the Scripps Institution of Oceanography, while NOAA measurements at Mauna Loa still use the nondispersive infrared sensor.
Results and interpretation
The measurements collected at Mauna Loa Observatory show a steady increase in mean atmospheric CO2 concentration from 313 parts per million by volume (ppmv) in March 1958 to 406 ppmv in November 2018, with a current increase of 2.48 ± 0.26 (mean ± 2 std dev) ppmv CO2 per year. This increase in atmospheric CO2 is due to the combustion of fossil fuels, and has been accelerating in recent years. Since CO2 is a greenhouse gas, this has significant implications for global warming. Measurements of CO2 concentration in ancient air bubbles trapped in polar ice cores show that mean atmospheric CO2 concentration was between 275 and 285 ppmv during the Holocene epoch (9,000 BCE onwards), but started rising sharply at the beginning of the nineteenth century.
The Keeling Curve also shows a cyclic variation of about 6 ppmv each year corresponding to the seasonal change in uptake of CO2 by the world's land vegetation. Most of this vegetation is in the Northern hemisphere where most of the land is located. From a maximum in May, the CO2 level decreases during the northern spring and summer as new plant growth takes CO2 out of the atmosphere through photosynthesis. After reaching a minimum in September, the CO2 level rises again in the northern fall and winter as plants and leaves die off and decay, releasing CO2 back into the atmosphere.
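The overall shape of the curve, a long-term rise plus a seasonal swing of roughly 6 ppmv, can be mimicked with a simple empirical model. The coefficients below are illustrative guesses rather than fitted Scripps or NOAA values, and a single sinusoid does not reproduce the asymmetric May maximum and September minimum exactly.

```python
import numpy as np

def keeling_model(year):
    """Toy approximation of monthly-mean CO2 (ppmv) at Mauna Loa.

    trend: quadratic rise anchored near 315 ppmv in 1958 (illustrative
    coefficients, not a fit to the actual record);
    seasonal: roughly 6 ppmv peak-to-trough cycle peaking near May.
    """
    y = np.asarray(year, dtype=float)
    t = y - 1958.0
    trend = 315.0 + 0.8 * t + 0.0125 * t**2
    seasonal = 3.0 * np.cos(2 * np.pi * (y % 1.0 - 5.0 / 12.0))
    return trend + seasonal

print(round(float(keeling_model(1958.25)), 1))   # near the 1958 starting value
print(round(float(keeling_model(2018.9)), 1))    # roughly 400+ ppmv
```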
Legacy
Global monitoring
Due in part to the significance of Keeling's findings, NOAA began monitoring CO2 levels worldwide in the 1970s. Today, atmospheric CO2 levels are monitored at about 100 sites around the globe through the Global Greenhouse Gas Reference Network. Measurements at many other isolated sites have confirmed the long-term trend shown by the Keeling Curve, although no sites have as long a record as Mauna Loa.
Ralph Keeling
Since Charles David Keeling's death in 2005, responsibility and oversight of the project were transferred to Keeling's son, Ralph Keeling. On the fiftieth anniversary of the beginning of the project, the younger Keeling wrote an article in Science magazine describing his father's life and work, along with how the project has grown and evolved over time. Along with describing more precise measurement equipment and the funding needed to continue monitoring the Earth's CO2 levels, Keeling wrote about his pride in his father's work and how he has continued it in his memory.
Recognition
In 2015, the Keeling Curve was designated a National Historic Chemical Landmark by the American Chemical Society. Commemorative plaques were installed at Mauna Loa Observatory and at the Scripps Institution of Oceanography at the University of California, San Diego.
Passing 400 ppm in 2013
On May 9, 2013, the daily mean concentration of CO2 in the atmosphere measured at Mauna Loa surpassed 400 parts per million (ppmv). Estimates of CO2 during previous geologic eras suggest that it has not reached this level since the mid-Pliocene, 2 to 4 million years ago. This level of carbon dioxide, causing climate change, suggests a continued worsening in natural and ecological disasters, which increasingly threatens human and animal habitats on Earth, if greenhouse gas emissions are not significantly reduced.
See also
Greenhouse gas monitoring
Action for Climate Empowerment (ACE)
Paris Agreement
References
External links
Official Keeling Curve website. Scripps Institution of Oceanography, UC San Diego
earth: Annually-updated version of the Keeling curve
Climate Change Is Clear Atop Mauna Loa, NPR, Day to Day, May 1, 2007
Scripps Institution of Oceanography CO2-Program: Home of the Keeling Curve
Atmosphere
Historical climatology
Carbon dioxide
20th-century neologisms | Keeling Curve | [
"Chemistry"
] | 1,510 | [
"Greenhouse gases",
"Carbon dioxide"
] |
1,441,618 | https://en.wikipedia.org/wiki/MIL-STD-1553 | MIL-STD-1553 is a military standard published by the United States Department of Defense that defines the mechanical, electrical, and functional characteristics of a serial data bus. It was originally designed as an avionic data bus for use with military avionics, but has also become commonly used in spacecraft on-board data handling (OBDH) subsystems, both military and civil, including use on the James Webb space telescope. It features multiple (commonly dual) redundant balanced line physical layers, a (differential) network interface, time-division multiplexing, half-duplex command/response protocol, and can handle up to 31 Remote Terminals (devices); 32 is typically designated for broadcast messages. A version of MIL-STD-1553 using optical cabling in place of electrical is known as MIL-STD-1773.
MIL-STD-1553 was first published as a U.S. Air Force standard in 1973, and first was used on the F-16 Falcon fighter aircraft. Other aircraft designs quickly followed, including the F/A-18 Hornet, AH-64 Apache, P-3C Orion, F-15 Eagle and F-20 Tigershark. It is widely used by all branches of the U.S. military and by NASA. Outside of the US it has been adopted by NATO as STANAG 3838 AVS. STANAG 3838, in the form of UK MoD Def-Stan 00-18 Part 2, is used on the Panavia Tornado; BAE Systems Hawk (Mk 100 and later); and extensively, together with STANAG 3910 "EFABus", on the Eurofighter Typhoon. Saab JAS 39 Gripen uses MIL-STD-1553B. The Russian made MiG-35 also uses MIL-STD-1553. MIL-STD-1553 is being replaced on some newer U.S. designs by IEEE 1394 (commonly known as FireWire).
Revisions
MIL-STD-1553B, which superseded the earlier 1975 specification MIL-STD-1553A, was published in 1978. The basic difference between the 1553A and 1553B revisions is that in the latter, the options are defined rather than being left for the user to define as required. It was found that when the standard did not define an item, there was no coordination in its use. Hardware and software had to be redesigned for each new application. The primary goal of the 1553B was to provide flexibility without creating new designs for each new user. This was accomplished by specifying the electrical interfaces explicitly so that electrical compatibility between designs by different manufacturers could be assured.
Six change notices to the standard have been published since 1978. For example, change notice 2 in 1986 changed the title of the document from "Aircraft internal time division command/response multiplex data bus" to "Digital time division command/response multiplex data bus".
MIL-STD-1553C is the last revision made in February 2018. Revision C is functionally equivalent to Revision B but contains updated graphics and tables to ease readability of the standard.
The MIL-STD-1553 standard is maintained by both the U.S. Department of Defense and the Aerospace branch of the Society of Automotive Engineers.
Physical layer
A single bus consists of a wire pair with 70–85 Ω impedance at 1 MHz. Where a circular connector is used, its center pin is used for the high (positive) Manchester bi-phase signal. Transmitters and receivers couple to the bus via isolation transformers, and stub connections branch off using a pair of isolation resistors and, optionally, a coupling transformer. This reduces the impact of a short circuit and ensures that the bus does not conduct current through the aircraft. A Manchester code is used to present both clock and data on the same wire pair and to eliminate any DC component in the signal (which cannot pass the transformers). The bit rate is 1.0 megabit per second (1-bit per μs). The combined accuracy and long-term stability of the bit rate is only specified to be within ±0.1%; the short-term clock stability must be within ±0.01%. The peak-to-peak output voltage of a transmitter is 18–27 V.
The bus can be made dual or triply redundant by using several independent wire pairs, and then all devices are connected to all buses. There is provision to designate a new bus control computer in the event of a failure by the current master controller. Usually, the auxiliary flight control computer(s) monitor the master computer and aircraft sensors via the main data bus. A different version of the bus uses optical fiber, which weighs less and has better resistance to electromagnetic interference, including EMP. This is known as MIL-STD-1773. NASA's "AS 1773" experiment has a dual rate of 1 Mbit/s or 20 Mbit/s – probably a predecessor of STANAG 3910.
Bus protocol
A MIL-STD-1553 multiplex data bus system consists of a Bus Controller (BC) controlling multiple Remote Terminals (RT) all connected together by a data bus providing a single data path between the Bus Controller and all the associated Remote Terminals. There may also be one or more Bus Monitors (BM); however, Bus Monitors are specifically not allowed to take part in data transfers, and are only used to capture or record data for analysis, etc. In redundant bus implementations, several data buses are used to provide more than one data path, i.e. dual redundant data bus, tri-redundant data bus, etc. All transmissions onto the data bus are accessible to the BC and all connected RTs. Messages consist of one or more 16-bit words (command, data, or status). The 16 bits comprising each word are transmitted using Manchester code, where each bit is transmitted as a 0.5 μs high and 0.5 μs low for a logical 1 or a low-high sequence for a logical 0. Each word is preceded by a 3 μs sync pulse (1.5 μs low plus 1.5 μs high for data words and the opposite for command and status words, which cannot occur in the Manchester code) and followed by an odd parity bit. Practically each word could be considered as a 20-bit word: 3-bit for sync, 16-bit for payload and 1-bit for odd parity control. The words within a message are transmitted contiguously and there has to be a minimum of a 4 μs gap between messages. However, this inter-message gap can be, and often is, much larger than 4 μs, even up to 1 ms with some older Bus Controllers. Devices have to start transmitting their response to a valid command within 4–12 μs and are considered to not have received a command or message if no response has started within 14 μs.
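The word format can be made concrete with a short sketch that serializes one word as a sequence of Manchester half-bit levels; the function name and the ±1 level representation are illustrative choices, not part of the standard.

```python
def manchester_word(payload, is_data_word=True):
    """Serialize one 16-bit MIL-STD-1553 word as half-bit levels (+1 / -1).

    Each half-bit lasts 0.5 us at 1 Mbit/s: a logical 1 is high then low,
    a logical 0 is low then high.  The 3 us sync is 1.5 us low then 1.5 us
    high for data words, and the opposite for command and status words.
    """
    sync = [-1, -1, -1, +1, +1, +1] if is_data_word else [+1, +1, +1, -1, -1, -1]
    bits = [(payload >> (15 - i)) & 1 for i in range(16)]   # MSB transmitted first
    parity = (sum(bits) + 1) % 2                            # odd parity over the 17 bits
    levels = list(sync)
    for bit in bits + [parity]:
        levels.extend([+1, -1] if bit else [-1, +1])
    return levels                                           # 6 + 2*17 = 40 half-bits (20 us)

waveform = manchester_word(0x0002)      # an arbitrary 16-bit payload
print(len(waveform), waveform[:10])
```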
All communication on the bus is under the control of the Bus Controller using commands from the BC to the RTs to receive or transmit. The sequence of words, (the form of the notation is <originator>.<word_type(destination)> and is a notation similar to CSP), for transfer of data from the BC to a terminal is
master.command(terminal) → terminal.status(master) → master.data(terminal) → master.command(terminal) → terminal.status(master)
and for terminal to terminal communication is
master.command(terminal_1) → terminal_1.status(master) → master.command(terminal_2) → terminal_2.status(master) → master.command(terminal_1) → terminal_1.data(terminal_2) → master.command(terminal_2) → terminal_2.status(master)
This means that during a transfer, all communication is started by the Bus Controller, and a terminal device cannot start a data transfer on its own. In the case of an RT to RT transfer the sequence is as follows: An application or function in the subsystem behind the RT interface (e.g. RT1) writes the data that is to be transmitted into a specific (transmit) sub-address (data buffer). The time at which this data is written to the sub-address is not necessarily linked to the time of the transaction, though the interfaces ensure that partially updated data is not transmitted. The Bus controller commands the RT that is the destination of the data (e.g. RT2) to receive the data at a specified (receive) data sub-address and then commands RT1 to transmit from the transmit sub-address specified in the command. RT1 transmits a Status word, indicating its current status, and the data. The Bus Controller receives RT1's status word, and sees that the transmit command has been received and actioned without a problem. RT2 receives the data on the shared data bus and writes it into the designated receive sub-address and transmits its Status word. An application or function on the subsystem behind the receiving RT interface may then access the data. Again the timing of this read is not necessarily linked to that of the transfer. The Bus Controller receives RT2's status word and sees that the receive command and data have been received and actioned without a problem.
If, however, either RT fails to send its status or the expected data or indicates a problem through the setting of error bits in the status word, the Bus Controller may retry the transmission. Several options are available for such retries including an immediate retry (on the other data bus of a redundant pair of data buses) and a retry later (on the same bus) in the sequence of transfers.
The sequences ensure that the terminal is functioning and able to receive data. The status word at the end of a data transfer sequence ensures that the data has been received and that the result of the data transfer is acceptable. It is this sequence that gives MIL-STD-1553 its high integrity.
However, the standard does not specify any particular timing for any particular transfer — that's up to the system designers. Generally (the way it is done on most military aircraft), the Bus Controller has a schedule of transfers that covers the majority of transfers, often organized into a major frame or major cycle, which is often subdivided into minor cycles. In such a cyclic executive schedule structure, transfers that occur in every minor cycle (rate group 1) happen at the highest rate, typically 50 Hz, transfers that occur in every other minor cycle, of which there are two groups (rate group 2.1 and 2.2) happen at the next highest rate, e.g. 25 Hz. Similarly, there are four groups (3.1, 3.2, 3.3, and 3.4) at, e.g., 12.5 Hz and so on. Hence, where this scheduling structure is used, the transfers are all at harmonically related frequencies, e.g. 50, 25, 12.5, 6.25, 3.125, and 1.5625 Hz (for a major frame comprising 32 minor cycles at 50 Hz). Whilst RTs cannot start a transfer directly on their own, the standard does include a method for when an RT needs to transmit data that is not automatically scheduled by the Bus Controller. These transfers are often called acyclic transfers as they are outside the structure used by the cyclic executive. In this sequence, an RT requests transmission through a bit in the status word, the Service Request bit. Generally, this causes the Bus Controller to transmit a Transmit Vector Word Mode Code command. However, where an RT only has one possible acyclic transfer, the Bus Controller can skip this part. The vector word is transmitted by the RT as a single 16-bit data word. The format of this vector word is not defined in the standard, so the system designers must specify what values from what RTs mean what action the Bus Controller is to take. This may be to schedule an acyclic transfer either immediately or at the end of the current minor cycle. This means that the Bus Controller has to poll all the Remote Terminals connected to the data bus, generally at least once in a major cycle. RTs with higher-priority functions (for example, those operating the aircraft control surfaces) are polled more frequently. Lower-priority functions are polled less frequently.
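Such a cyclic executive can be sketched as a table of minor cycles. The transfer names, rates, and phase offsets below are invented for illustration and do not come from any real bus schedule.

```python
# Illustrative major-frame builder: 32 minor cycles at 50 Hz (0.64 s major frame).
MINOR_CYCLES = 32

# (transfer name, rate-group period in minor cycles, phase offset)
SCHEDULE = [
    ("flight_controls", 1, 0),   # 50 Hz    - every minor cycle
    ("air_data",        2, 0),   # 25 Hz
    ("nav_update",      2, 1),   # 25 Hz, offset to balance bus loading
    ("display_refresh", 4, 2),   # 12.5 Hz
    ("fuel_status",    32, 5),   # 1.5625 Hz - once per major frame
]

def build_major_frame():
    frame = []
    for cycle in range(MINOR_CYCLES):
        transfers = [name for name, period, phase in SCHEDULE
                     if cycle % period == phase]
        frame.append(transfers)
    return frame

for i, transfers in enumerate(build_major_frame()[:4]):
    print(f"minor cycle {i}: {transfers}")
```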
Six types of transactions are allowed between the BC and a specific RT or between the Bus Controller and a pair of RTs:
Controller to RT Transfer. The Bus Controller sends one 16-bit receive command word, immediately followed by 1 to 32 16-bit data words. The selected Remote Terminal then sends a single 16-bit Status word.
RT to Controller Transfer. The Bus Controller sends one transmit command word to a Remote Terminal. The Remote Terminal then sends a single Status word, immediately followed by 1 to 32 words.
RT to RT Transfers. The Bus Controller sends out one receive command word immediately followed by one transmit command word. The transmitting Remote Terminal sends a Status word immediately followed by 1 to 32 data words. The receiving Terminal then sends its Status word.
Mode Command Without Data Word. The Bus Controller sends one command word with a Sub-address of 0 or 31 signifying a Mode Code type command. The Remote Terminal responds with a Status word.
Mode Command With Data Word (Transmit). The Bus Controller sends one command word with a Sub-address of 0 or 31 signifying a Mode Code type command. The Remote Terminal responds with a Status word immediately followed by a single Data word.
Mode Command With Data Word (Receive). The Bus Controller sends one command word with a Sub-address of 0 or 31 signifying a Mode Code type command immediately followed by a single data word. The Remote Terminal responds with a Status word.
MIL-STD-1553B also introduced the concept of optional broadcast transfers, in which data is sent to all RTs that implement the option, but to which no RTs respond, as this would cause conflicts on the bus. These can be used where the same data is sent to multiple RTs, to reduce the number of transactions and thus reduce the loading on the data bus. However, the lack of explicit responses by the RTs receiving these broadcasts means that these transfers cannot be automatically re-tried in the event of an error in the transaction.
Four types of broadcast transactions are allowed between the BC and all capable RTs:
Controller to RT(s) Transfer. The Bus Controller sends one receive command word with a Terminal address of 31 signifying a broadcast type command, immediately followed by 1 to 32 data words. All Remote Terminals that implement broadcasts will accept the data but no Remote Terminals will respond.
RT to RT(s) Transfers. The Bus Controller sends out one receive command word with a Terminal address of 31 signifying a broadcast type command, immediately followed by one transmit command. The transmitting Remote Terminal sends a Status word immediately followed by 1 to 32 data words. All Remote Terminals that implement broadcasts will accept the data but no Remote Terminals will respond.
Mode Command Without Data Word (Broadcast). The Bus Controller sends one command word with a Terminal address of 31 signifying a broadcast type command and a sub-address of 0 or 31 signifying a Mode Code type command. No Remote Terminals will respond.
Mode Command With Data Word (Broadcast). The Bus Controller sends one command word with a Terminal address of 31 signifying a broadcast type command and a sub-address of 0 or 31 signifying a Mode Code type command, immediately followed by one Data word. No Remote Terminals will respond.
The Command Word is built as follows. The first 5 bits are the Remote Terminal address (0–31). The sixth bit is 0 for Receive or 1 for Transmit. The next 5 bits indicate the location (sub-address) to hold or get data on the Terminal (1–30). Note that sub-addresses 0 and 31 are reserved for Mode Codes. The last 5 bits indicate the number of words to expect (1–32). All zero bits indicate 32 words. In the case of a Mode Code, these bits indicate the Mode Code number (e.g., Initiate Self Test and Transmit BIT Word).
The Status Word decodes as follows. The first 5 bits are the address of the Remote Terminal that is responding. The rest of the word is single bit condition codes, with some bits reserved. A 'one' state indicates condition is true. More than one condition may be true at the same time.
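The Command Word layout described above can be expressed as a small bit-field decoder. The helper below is an illustrative sketch; the MSB-first field ordering is assumed from the description in the text.

```python
def decode_command_word(word):
    """Unpack the 16-bit Command Word fields described above (illustrative helper)."""
    rt_address   = (word >> 11) & 0x1F   # first 5 bits: RT address (0-31)
    transmit     = (word >> 10) & 0x01   # sixth bit: 1 = transmit, 0 = receive
    sub_address  = (word >> 5)  & 0x1F   # next 5 bits: sub-address (0 or 31 = mode code)
    count_or_mc  = word & 0x1F           # last 5 bits: word count (0 means 32) or mode code
    is_mode_code = sub_address in (0, 31)
    word_count   = None if is_mode_code else (count_or_mc or 32)
    return rt_address, transmit, sub_address, is_mode_code, word_count, count_or_mc

# Command word for "RT 3, transmit, sub-address 1, 1 data word":
cmd = (3 << 11) | (1 << 10) | (1 << 5) | 1
print(hex(cmd), decode_command_word(cmd))
```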
The following example, based on a captured bus waveform, illustrates many of the protocol and physical layer concepts explained above. For example, the RT address contained in the Command Word has a value of 0x3 (in the range 0 to 31). The sixth bit is 1, indicating a Transmit from the RT. The sub-address is 0x01. The last 5 bits indicate the number of words to expect, which has a value of 1, which is matched by the single Data Word (value 0x2) after the Status Word.
Also as explained above, devices have to start transmitting their response to a valid command within 4–12 microseconds. In the example, the Response Time is 8.97 μs, therefore within specifications. This means that the Remote Terminal (RT) number 3 has responded to the Bus Controller query after 8.97 μs. The amplitude of the query is lower than the amplitude of the response because the signal is probed at a location closer to the Remote Terminal.
In the Status Word, the first 5 bits are the address of the Remote Terminal that is responding, in this case 0x3. A correct Transfer exhibits the same RT address in the Command Word as in the Status Word.
Conceptual description
Figure 1 shows a sample MIL-STD-1553B system that consists of:
redundant MIL-STD-1553B buses
a Bus Controller
a Backup Bus Controller
a Bus Monitor
a standalone Remote Terminal with one or more subsystems communicating with it
a subsystem with an embedded Remote Terminal
The Bus Controller
There is only one Bus Controller at a time on any MIL-STD-1553 bus. It initiates all message communication over the bus.
As shown in Figure 1, the Bus Controller:
operates according to a command list stored in its local memory
commands the various Remote Terminals to send or receive messages
services any requests that it receives from the Remote Terminals
detects and recovers from errors
keeps a history of errors
The 1553B spec dictates that all devices in the system be connected to a redundant pair of buses to provide an alternate data path in the event of damage or failure of the primary bus. Bus messages only travel on one bus at a time, determined by the Bus Controller.
Backup Bus Controller
While there may be only one BC on the bus at any one time, the standard provides a mechanism for handover to a Backup Bus Controller (BBC) or (BUBC), using flags in the status word and Mode Codes. This may be used in normal operation where handover occurs because of some specific function, e.g. handover to or from a BC that is external to the aircraft, but connected to the bus. Procedures for handover in fault and failure conditions generally involve discrete connections between the main and backup BCs, and the backup monitoring the actions of the main BC during operation. For example, if there is a prolonged quiescence on the bus indicating that the active BC has failed, the next highest priority backup BC, indicated by the discrete connections, will take over and begin operating as the active BC.
The Bus Monitor
A Bus Monitor (BM) cannot transmit messages over the data bus. Its primary role is to monitor and record bus transactions, without interfering with the operation of the Bus Controller or the RTs. These recorded bus transactions can then be stored, for later off-line analysis.
Ideally, a BM captures and records all messages sent over the 1553 data bus. However, recording all of the transactions on a busy data bus might be impractical, so a BM is often configured to record a subset of the transactions, based on some criteria provided by the application program.
Alternatively, a BM is used in conjunction with a Backup Bus Controller. This allows the Backup Bus Controller to "hit the ground running", if it is called upon to become the active Bus Controller.
The Remote Terminal
A Remote Terminal can be used to provide:
an interface between the MIL-STD-1553B data bus and an attached subsystem
a bridge between a MIL-STD-1553B bus and another MIL-STD-1553B bus.
For example, in a tracked vehicle, a Remote Terminal might acquire data from an inertial navigational subsystem, and send that data over a 1553 data bus to another Remote Terminal, for display on a crew instrument. Simpler examples of Remote Terminals might be interfaces that switch on the headlights, the landing lights, or the annunciators in an aircraft.
Test Plans for Remote Terminals:
The RT Validation Test Plan is intended for design verification of Remote Terminals designed to meet the requirements of AS 15531 and MIL-STD-1553B with Notice 2. This test plan was initially defined in MIL-HDBK-1553, Appendix A. It was updated in MIL-HDBK-1553A, Section 100. The test plan is maintained by the SAE AS-1A Avionic Networks Subcommittee as AS4111.
The RT Production Test Plan is a simplified subset of the validation test plan and is intended for production testing of Remote Terminals. This test plan is maintained by the SAE AS-1A Avionic Networks Subcommittee as AS4112.
Bus hardware characteristics
The bus hardware encompasses (1) cabling, (2) bus couplers, (3) terminators and (4) connectors.
Cabling
The industry has standardized the cable type as a twinax cable with a characteristic impedance of 78 ohms, which is almost the midpoint of the specification range of 70 to 85 ohms.
MIL-STD-1553B does not specify the length of the cable. However, the maximum length of cable is directly related to the gauge of the cable conductor and time delay of the transmitted signal. A smaller conductor attenuates the signal more than a larger conductor. Typical propagation delay for a 1553B cable is 1.6 nanoseconds per foot. Thus, a 100-foot (30 m) bus would have an end-to-end propagation delay of 160 nanoseconds, which is equal to the average rise time of a 1553B signal. According to MIL-HDBK-1553A, when a signal's propagation delay time is more than 50% of the rise or fall time, it is necessary to consider transmission line effects. This delay time is proportional to the distance propagated. Also, consideration must be given to the actual distance between the transmitter and receiver, and the individual waveform characteristics of the transmitters and receivers.
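The 50% rule is straightforward to check for a given cable run. The sketch below uses the 1.6 ns/ft figure quoted above and treats the 160 ns rise time as an assumed typical value.

```python
PROP_DELAY_NS_PER_FT = 1.6      # typical 1553B cable, per the text
RISE_TIME_NS = 160.0            # assumed average rise time of a 1553B signal

def transmission_line_effects(bus_length_ft):
    """Return (True/False, delay): True if delay exceeds 50% of the rise time."""
    delay_ns = bus_length_ft * PROP_DELAY_NS_PER_FT
    return delay_ns > 0.5 * RISE_TIME_NS, delay_ns

for length in (20, 50, 100, 300):
    significant, delay = transmission_line_effects(length)
    print(f"{length:4d} ft: delay {delay:6.1f} ns -> consider line effects: {significant}")
```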
MIL-STD-1553B specifies that the longest stub length is 20 feet (6.1 m) for transformer coupled stubs, but this can be exceeded. With no stubs attached, the main bus looks like an infinite length transmission line with no disturbing reflections. When a stub is added, the bus is loaded and a mismatch occurs with resulting reflections. The degree of mismatch and signal distortion due to reflections are a function of the impedance presented by the stub and terminal input impedance. To minimize signal distortion, it is desirable that the stub maintain high impedance. This impedance is reflected back to the bus. At the same time, however, the impedance must be kept low so that adequate signal power will be delivered to the receiving end. Therefore, a tradeoff between these conflicting requirements is necessary to achieve the specified signal-to-noise ratio and system error rate performance (for more information, refer to MIL-HDBK-1553A).
Stubbing
Each terminal, RT, BC, or BM, is connected to the bus through a stub, formed of a length of cable of the same type as the bus itself. MIL-STD-1553B defines two ways of coupling these stubs to the bus: transformer coupled stubs and direct coupled stubs. Transformer coupled stubs are preferred for their fault tolerance and better matching to the impedance of the bus, and consequent reduction in reflections, etc. The appendix to MIL-STD-1553B (in section 10.5, Stubbing) states "The preferred method of stubbing is to use transformer coupled stubs… This method provides the benefits of DC isolation, increased common mode rejection, a doubling of effective stub impedance, and fault isolation for the entire stub and terminal. Direct coupled stubs… should be avoided if at all possible. Direct coupled stubs provide no DC isolation or common mode rejection for the terminal external to its subsystem. Further, any shorting fault between the subsystems [sic] internal isolation resistors (usually on a circuit board) and the main bus junction will cause failure of that entire bus. It can be expected that when the direct coupled stub length exceeds 1.6 feet [0.5 m], that it will begin to distort the main bus waveforms."
The use of transformer coupled stubs also provides improved protection for 1553 terminals against lightning strikes. Isolation is even more critical in new composite aircraft where the skin of the aircraft no longer provides an inherent Faraday shield as was the case with aluminum skinned aircraft.
In a transformer coupled stub, the length of the stub cable should not exceed 20 feet (6.1 m), but this may be exceeded "if installation requirements dictate." The coupling transformer has to have a turns ratio of 1:1.41 ± 3.0 percent. The resistors R both have to have a value of 0.75 Zo ± 2.0 percent, where Zo is the characteristic impedance of the bus at 1 MHz.
In a direct coupled stub, the length of stub cable should not exceed 1-foot, but again this may be exceeded if installation requirements dictate. The isolation resistors R have to have a fixed value of 55 ohms ± 2.0 percent.
Bus couplers
Stubs for RTs, the BC, or BMs, are generally connected to the bus through coupling boxes, which may provide a single or multiple stub connections. These provide the required shielding (≥ 75 percent) and, for transformer coupled stubs, contain the coupling transformers and isolation resistors. They have two external connectors through which the bus feeds, and one or more external connectors to which the stub or stubs connect. These stub connectors should not be terminated with matching resistors, but left open circuit when not used, with blanking caps where necessary. One of the bus connectors may be terminated where the bus coupler is physically at the end of the bus cable, i.e. it is not normally considered essential to have a length of bus cable between the last bus coupler and the termination resistor.
Cable termination
Both ends of the bus, whether it includes one coupler or a series of couplers connected together, must be terminated (in accordance with MIL-STD-1553B) with "a resistance, equal to the selected cable nominal characteristic impedance (Zo) ± 2.0 percent." This is typically 78 ohms. The purpose of electrical termination is to minimize the effects of signal reflections that can cause waveform distortion. If terminations are not used, the communications signal can be compromised causing disruption or intermittent communications failures.
Connectors
The standard does not specify the connector types or how they should be wired, other than shielding requirements, etc. In lab environments concentric twinax bayonet style connectors are commonly used. These connectors are available in standard (BNC size), miniature and sub-miniature sizes. In military aircraft implementations, MIL-DTL-5015 and MIL-DTL-38999 circular connectors are generally used.
Evolution
STANAG 3910 (EFABus) mates a 1553 or 1773 link with additional high-speed 20 Mbps buses, either optical or electrical. In the STANAG form, the 1553/1773 low-speed link serves as the control channel for the high speed link. In the EFABus Express (EfEx) form, the high-speed link acts as its own control channel. Either way, high and low-speed buses share the same addressing model and can communicate with each other.
STANAG 7221 (E1553) expands a 1553 link with the capability to carry a 100 Mbps signal on the same wire without interfering with old signaling. The concept is similar to how ADSL avoids voice frequencies, but done at higher bandwidths. In addition to 1553B, it also runs over coax, twisted pair, Power-Line Carrier, and existing ARINC 429 links.
Similar systems
DIGIBUS (or Digibus, GAM-T-101) is the French counterpart to MIL-STD-1553. It is similar to MIL-STD-1553 in its notions of Bus Controller, Remote Terminal, and monitor, and in its transmission speed, but differs in that DIGIBUS uses separate links for data and commands.
GOST 26765.52-87 and its descendant GOST R 52070-2003 are the Soviet and Russian, respectively, equivalents of MIL-STD-1553B. The encoding, data rate, word structure, and control commands are fully identical.
GJV289A is the Chinese equivalent of MIL-STD-1553. Aircraft utilizing this system can reportedly use both Soviet (GOST bus) and western (MIL-STD-1553 bus) weapons.
H009 (also called MacAir H009), introduced by McDonnell in 1967, was one of the first avionics data buses. It is a dual redundant bus controlled by a Central Control Complex (CCC), with up to 16 Peripheral Units (PUs), synchronously communicating using a 1 MHz clock. H009 was used in early F-15 fighter jets, but due to its noise sensitivity and other reliability issues was replaced by MIL-STD-1553.
Development tools
When developing or troubleshooting for MIL-STD-1553, examination of the electronic signals is useful. A logic analyzer with protocol-decoding capability, or a dedicated bus analyzer or protocol analyzer, is a useful tool for collecting, analyzing, decoding, and storing the waveforms of the high-speed electronic signals.
Hardware
The Intel M82553 Protocol Management Unit (PMU) uses CHMOS III technology. This device meets the full bus interface protocol standard.
See also
MIL-STD-1760
MIL-STD-704
Aircraft flight control systems
Fly-by-wire
Avionics Full-Duplex Switched Ethernet (AFDX) – a faster Ethernet-based technology
ARINC 429 Commercial Avionics Counterpart
Bus coupler – A brief description of bus coupler
TTEthernet – Time-Triggered Ethernet (SAE AS6802)
SpaceWire
Sources
MIL-STD-1553C: Digital Time Division Command/Response Multiplex Data Bus. United States Department of Defense, February 2018.
SAE AS15531: Digital Time Division Command/Response Multiplex Data Bus.
SAE AS15532: Data Word and Message Formats.
SAE AS4111: RT Validation Test Plan.
SAE AS4112: RT Production Test Plan.
References
External links
MIL-STD-1553, Digital Time Division Command/Response Multiplex Data Bus. United States Department of Defense, February 2018.
MIL-STD-1773, Fiber Optics Mechanization of an Aircraft Internal Time Division Command/Response Multiplex Data Bus. United States Department of Defense, October 1989.
MIL-STD-1553 Tutorial from AIM, Avionics Databus Solutions, Interface Boards for MIL-STD-1553/1760
MIL-STD-1553 Tutorial from Avionics Interface Technologies (registration required)
MIL-STD-1553 Tutorial (video) from Excalibur Systems Inc.
MIL-STD-1553 Couplers Tutorial (video) from Excalibur Systems Inc.
MIL-STD-1553 Tutorial by GE Intelligent Platforms (registration required)
MIL-STD-1553 Tutorial and References from Ballard Technology (includes MIL-STD-1553B & MIL-HDBK-1553A Notice2)
MIL-STD-1553 Designer's Guide from Data Device Corporation
MIL-STD-1553 Tutorial and Reference from Alta Data Technologies
INTRODUCTION TO THE MIL-STD-1553B SERIAL MULTIPLEX DATA BUS by D. R. Bracknell, Royal Aircraft Establishment, Farnborough, 1988.
Introduction to MIL-STD-1553 Short Course from Georgia Tech Professional Education
MIL-STD-1553 Complete online reference from Data Device Corporation
Military Computer with MIL-STD-1553 Interface from AMDTEC Defence
Avionics
Military of the United States standards
Serial buses
Aviation standards | MIL-STD-1553 | [
"Technology"
] | 6,842 | [
"Avionics",
"Aircraft instruments"
] |
1,442,200 | https://en.wikipedia.org/wiki/Medicines%20and%20Healthcare%20products%20Regulatory%20Agency | The Medicines and Healthcare products Regulatory Agency (MHRA) is an executive agency of the Department of Health and Social Care in the United Kingdom which is responsible for ensuring that medicines and medical devices work and are acceptably safe.
The MHRA was formed in 2003 with the merger of the Medicines Control Agency (MCA) and the Medical Devices Agency (MDA). In April 2013, it merged with the National Institute for Biological Standards and Control (NIBSC) and was rebranded, with the MHRA identity being used solely for the regulatory centre within the group. The agency employs more than 1,200 people in London, York and South Mimms, Hertfordshire.
Structure
The MHRA is divided into three main centres:
MHRA Regulatory – the regulator for the pharmaceutical and medical devices industries
Clinical Practice Research Datalink – licences anonymised health care data to pharmaceutical companies, academics and other regulators for research
National Institute for Biological Standards and Control – responsible for the standardisation and control of biological medicines
The MHRA has several independent advisory committees which provide the UK Government with information and guidance on the regulation of medicines and medical devices. There are currently eight such committees:
Advisory Board on the Registration of Homeopathic Products
Herbal Medicines Advisory Committee
The Review Panel
Independent Scientific Advisory Committee for MHRA database research
Medicines Industry Liaison Group
Innovation Office
Blood Consultative Committee
Devices Expert Advisory Committee
History
In 1999, the Medicines Control Agency (MCA) took over control of the General Practice Research Database (GPRD) from the Office for National Statistics. The Medicines Control Agency (MCA) and the Medical Devices Agency (MDA) merged in 2003 to form MHRA. In April 2012, the GPRD was rebranded as the Clinical Practice Research Datalink (CPRD). In April 2013, MHRA merged with the National Institute for Biological Standards and Control (NIBSC) and was rebranded, with the MHRA identity being used for the parent organisation and one of the centres within the group. At the same time, CPRD was made a separate centre of the MHRA.
Roles
Operate post-marketing surveillance – in particular the Yellow Card Scheme – for reporting, investigating and monitoring of adverse drug reactions to medicines and incidents with medical devices.
Assess and authorise medicinal products for sale and supply in the UK.
Oversee the Notified Bodies that ensure medical device manufacturers comply with regulatory requirements before putting devices on the market.
Operate a quality surveillance system to sample and test medicines to address quality defects and to monitor the safety and quality of unlicensed products.
Investigate internet sales and potential counterfeiting of medicines, and prosecute where necessary.
Regulate clinical trials of medicines and medical devices.
Monitor and ensure compliance with statutory obligations relating to medicines and medical devices.
Promote safe use of medicines and devices.
Manage the Clinical Practice Research Datalink and the British Pharmacopoeia.
The MHRA hosts and supports a number of expert advisory bodies, including the British Pharmacopoeia Commission, and the Commission on Human Medicine which replaced the Committee on the Safety of Medicines in 2005.
The MHRA manages the Early Access to Medicines Scheme (EAMS), which was created in 2014 to allow access to medicines prior to market authorisation where there is a clear unmet medical need.
European Union
Prior to the UK's departure from the European Union in January 2021, the MHRA was part of the European system of approval. Under this system, national bodies can be the rapporteur or co-rapporteur for any given pharmaceutical application, taking on the bulk of the verification work on behalf of all members, while the documents are still sent to other members as and where requested.
From January 2021, the MHRA is instead a stand-alone body, although under the Northern Ireland Protocol the authorisation of medicines marketed in Northern Ireland continued to be the responsibility of the European Medicines Agency. However, as a result of the 2023 Windsor Framework, the MHRA is expected to once again deal with authorisation throughout the United Kingdom.
Funding
The MHRA is funded by the Department of Health and Social Care for the regulation of medical devices, whilst the costs of medicines regulation are met through fees from the pharmaceutical industry. This has led to suggestions by some MPs that the MHRA is too reliant on industry, and so not fully independent.
In 2017, the MHRA was awarded over £980,000 by the Bill & Melinda Gates Foundation to fund its work with the foundation and the World Health Organization on improving safety monitoring for new medicines in low and middle-income countries. In response to a Freedom of Information request, in 2022 the MHRA stated that approximately £3 million had been received from the Gates Foundation for a number of initiatives spanning several financial years.
Key people
June Raine has been the chief executive of the MHRA since 2019, succeeding Ian Hudson who had held the post since 2013.
The MHRA's strategy is set by a board which consists of a chairperson (appointed for a three-year term by the Secretary of State for the Department of Health and Social Care) and eight non-executive directors, together with the chief executive and chief operating officer. The current co-chairs are Amanda Calvert, Graham Cooke and Michael Whitehouse.
Past chairs
2003 to December 2012 – Sir Alasdair Breckenridge
January 2013 to 2014 – Gordon Duff
December 2014 to 2020 – Michael Rawlins (also chaired UK Biobank; previously chair of the National Institute for Health and Care Excellence)
September 2020 to July 2023 – Stephen Lightfoot (also chaired Sussex Community NHS Foundation Trust)
Notable interventions
Covid-19
On vaccines
On 2 December 2020, the MHRA became the first global medicines regulator to approve an RNA vaccine when it gave conditional and temporary authorisation for the supply of the Pfizer–BioNTech COVID-19 vaccine codenamed BNT162b2 (later branded as Comirnaty). This approval enabled the start of the UK's COVID-19 vaccination programme. The regulator's public assessment report for the vaccine was published on 15 December.
The MHRA went on to give conditional and temporary authorisation for the supply of further vaccines: AZD1222 from Oxford University and AstraZeneca on 30 December, mRNA-1273 from Moderna on 8 January 2021, and a single-dose vaccine from Janssen on 28 May 2021. The approval of the Pfizer-BioNTech vaccine was extended to young people aged 12–15 in June 2021, 5–11 in December 2021, and from six months in December 2022.
The status of the Oxford / AstraZeneca vaccine was upgraded to conditional marketing authorisation on 24 June 2021. The MHRA confirmed in September 2021 that supplementary "booster" doses of these vaccines would be safe and effective, but stated that the Joint Committee on Vaccination and Immunisation had the task of advising if and when they should be used in this way. Later that month, the MHRA said the Moderna vaccine could also be given as a booster dose.
In August and September 2022, the MHRA approved the first bivalent COVID-19 booster vaccines.
On tests
In January 2021, the MHRA expressed concern to the UK government over plans to deploy lateral flow tests in schools in England, stating that they had not authorised daily use of the tests due to concerns that negative results may give false reassurance. The government suspended the scheme the following week, citing risks arising from high prevalence of the virus and higher rates of transmission of a new variant.
Cough syrup containing codeine
In July 2023, MHRA began a consultation to reclassify cough syrups containing codeine (an opiate) as prescription-only medicines, in response to a rise in recreational drug abuse cases since 2018. There were 277 serious and fatal reactions to medicines containing codeine in 2021, and 243 in 2022.
Criticism
In 2005, the MHRA was criticised by the House of Commons Health Committee for, among other things, lacking transparency, and for inadequately checking drug licensing data.
The MHRA and the US Food and Drug Administration were criticised in the 2012 book Bad Pharma, and in 2004 by David Healy in evidence to the House of Commons Health Committee, for having undergone regulatory capture, i.e. advancing the interests of the drug companies rather than the interests of the public.
The Cumberlege Report, also known as the Independent Medicines and Medical Devices Safety Review, is a comprehensive report commissioned by the UK government to investigate the harm caused by certain medical treatments and devices. Released in 2020, the report highlighted the suffering of thousands of patients who experienced complications from treatments such as pelvic mesh implants, sodium valproate, and Primodos. It criticized MHRA's failure to adequately respond to these issues, calling for improved patient safety measures, better regulation of medical devices, and increased support for those affected.
The COVID Response & Recovery APPG wrote to Stephen Brine, chairperson of the Health Select Committee, in October 2023 raising concerns about serious failures by MHRA and demanding an urgent investigation.
See also
Black triangle scheme
List of pharmacy organisations in the United Kingdom
European Medicines Agency
Food and Drug Administration (United States)
Regulation of therapeutic goods
References
External links
National Institute for Biological Standards and Control (NIBSC) website
Clinical Practice Research Datalink (CPRD) website
2003 establishments in the United Kingdom
Canary Wharf
Department of Health and Social Care
Executive agencies of the United Kingdom government
Health in the London Borough of Tower Hamlets
Pharmacy organisations in the United Kingdom
Medical regulation in the United Kingdom
National agencies for drug regulation
Organisations based in the London Borough of Tower Hamlets
Organizations established in 2003
Regulators of biotechnology products
Regulators of the United Kingdom
Regulation of medical devices | Medicines and Healthcare products Regulatory Agency | [
"Chemistry",
"Biology"
] | 1,962 | [
"Biotechnology products",
"Regulation of biotechnologies",
"National agencies for drug regulation",
"Regulators of biotechnology products",
"Drug safety"
] |
1,442,470 | https://en.wikipedia.org/wiki/Transpose%20of%20a%20linear%20map | In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces.
The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors.
Definition
Let $X^{\#}$ denote the algebraic dual space of a vector space $X$.
Let $X$ and $Y$ be vector spaces over the same field $\mathcal{K}$.
If $u : X \to Y$ is a linear map, then its algebraic adjoint or dual, is the map ${}^{\#}u : Y^{\#} \to X^{\#}$ defined by $f \mapsto f \circ u$.
The resulting functional ${}^{\#}u(f) := f \circ u$ is called the pullback of $f$ by $u$.
The continuous dual space of a topological vector space (TVS) $X$ is denoted by $X'$.
If $X$ and $Y$ are TVSs then a linear map $u : X \to Y$ is weakly continuous if and only if ${}^{\#}u\left(Y'\right) \subseteq X'$, in which case we let ${}^{t}u : Y' \to X'$ denote the restriction of ${}^{\#}u$ to $Y'$.
The map ${}^{t}u$ is called the transpose or algebraic adjoint of $u$.
The following identity characterizes the transpose of $u$:
$$\left\langle {}^{t}u(f), x \right\rangle = \langle f, u(x) \rangle \quad \text{for all } f \in Y' \text{ and } x \in X,$$
where $\langle \cdot, \cdot \rangle$ is the natural pairing defined by $\langle z, h \rangle := z(h)$.
Properties
The assignment $u \mapsto {}^{\#}u$ produces an injective linear map between the space of linear operators from $X$ to $Y$ and the space of linear operators from $Y^{\#}$ to $X^{\#}$.
If $X = Y$ then the space of linear maps is an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that ${}^{\#}(u \circ v) = {}^{\#}v \circ {}^{\#}u$.
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over $\mathcal{K}$ to itself.
One can identify ${}^{\#}\left({}^{\#}u\right)$ with $u$ using the natural injection into the double dual.
If $u : X \to Y$ and $v : Y \to Z$ are linear maps then ${}^{\#}(v \circ u) = {}^{\#}u \circ {}^{\#}v$.
If $u : X \to Y$ is a (surjective) vector space isomorphism then so is the transpose ${}^{\#}u : Y^{\#} \to X^{\#}$.
If $X$ and $Y$ are normed spaces and the linear operator $u : X \to Y$ is bounded, then the operator norm of ${}^{t}u$ is equal to the norm of $u$; that is, $\left\| {}^{t}u \right\| = \| u \|$.
Polars
Suppose now that $u : X \to Y$ is a weakly continuous linear operator between topological vector spaces $X$ and $Y$ with continuous dual spaces $X'$ and $Y'$, respectively.
Let $\langle \cdot, \cdot \rangle : X \times X' \to \mathcal{K}$ denote the canonical dual system, defined by $\langle x, x' \rangle := x'(x)$, where $x \in X$ and $x' \in X'$ are said to be orthogonal if $\langle x, x' \rangle = 0$.
For any subsets and let
denote the () (resp. ).
If and are convex, weakly closed sets containing the origin then implies
If and then
and
If and are locally convex then
Annihilators
Suppose and are topological vector spaces and is a weakly continuous linear operator (so ). Given subsets and define their (with respect to the canonical dual system) by
and
The kernel of is the subspace of orthogonal to the image of :
The linear map is injective if and only if its image is a weakly dense subset of (that is, the image of is dense in when is given the weak topology induced by ).
The transpose is continuous when both and are endowed with the weak-* topology (resp. both endowed with the strong dual topology, both endowed with the topology of uniform convergence on compact convex subsets, both endowed with the topology of uniform convergence on compact subsets).
(Surjection of Fréchet spaces): If and are Fréchet spaces then the continuous linear operator is surjective if and only if (1) the transpose is injective, and (2) the image of the transpose of is a weakly closed (i.e. weak-* closed) subset of
Duals of quotient spaces
Let be a closed vector subspace of a Hausdorff locally convex space and denote the canonical quotient map by
Assume is endowed with the quotient topology induced by the quotient map
Then the transpose of the quotient map is valued in and
is a TVS-isomorphism onto
If is a Banach space then is also an isometry.
Using this transpose, every continuous linear functional on the quotient space is canonically identified with a continuous linear functional in the annihilator of
Duals of vector subspaces
Let be a closed vector subspace of a Hausdorff locally convex space
If and if is a continuous linear extension of to then the assignment induces a vector space isomorphism
which is an isometry if is a Banach space.
Denote the inclusion map by
The transpose of the inclusion map is
whose kernel is the annihilator and which is surjective by the Hahn–Banach theorem. This map induces an isomorphism of vector spaces
Representation as a matrix
If the linear map $u$ is represented by the matrix $A$ with respect to two bases of $X$ and $Y$, then ${}^{t}u$ is represented by the transpose matrix $A^{\mathsf{T}}$ with respect to the dual bases of $Y'$ and $X'$, hence the name.
Alternatively, as $u$ is represented by $A$ acting to the right on column vectors, ${}^{t}u$ is represented by the same matrix acting to the left on row vectors.
These points of view are related by the canonical inner product on $\mathcal{K}^{n}$, which identifies the space of column vectors with the dual space of row vectors.
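As a hedged numerical illustration (not part of the article), the following checks the defining identity $\langle {}^{t}u(f), x \rangle = \langle f, u(x) \rangle$ when $u$ is represented by a matrix acting on column vectors and functionals are represented by row vectors; the specific matrices are arbitrary example data.

```python
# Minimal numerical check of <t(u)(f), x> = <f, u(x)> when u is given by a
# matrix A on column vectors and functionals are row vectors.
# The random values below are arbitrary example data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # u : R^2 -> R^3 represented by A
x = rng.standard_normal(2)        # a vector in the domain
f = rng.standard_normal(3)        # a functional on the codomain (row vector)

lhs = (A.T @ f) @ x               # transpose map applied to f, then paired with x
rhs = f @ (A @ x)                 # f paired with u(x)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```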
Relation to the Hermitian adjoint
The identity that characterizes the transpose, that is, $\left\langle {}^{t}u(f), x \right\rangle = \langle f, u(x) \rangle$, is formally similar to the definition of the Hermitian adjoint; however, the transpose and the Hermitian adjoint are not the same map.
The transpose is a map $Y' \to X'$ and is defined for linear maps between any vector spaces $X$ and $Y$, without requiring any additional structure.
The Hermitian adjoint maps $Y \to X$ and is only defined for linear maps between Hilbert spaces, as it is defined in terms of the inner product on the Hilbert space.
The Hermitian adjoint therefore requires more mathematical structure than the transpose.
However, the transpose is often used in contexts where the vector spaces are both equipped with a nondegenerate bilinear form such as the Euclidean dot product or another inner product.
In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map $Y \to X$.
For a complex Hilbert space, the inner product is sesquilinear and not bilinear, and these conversions change the transpose into the adjoint map.
More precisely: if $X$ and $Y$ are Hilbert spaces and $u : X \to Y$ is a linear map then the transpose of $u$ and the Hermitian adjoint of $u$, which we will denote respectively by ${}^{t}u$ and $u^{*}$, are related.
Denote by $I : X \to X'$ and $J : Y \to Y'$ the canonical antilinear isometries of the Hilbert spaces $X$ and $Y$ onto their duals.
Then $u^{*}$ is the following composition of maps:
$$Y \overset{J}{\longrightarrow} Y' \overset{{}^{t}u}{\longrightarrow} X' \overset{I^{-1}}{\longrightarrow} X.$$
Applications to functional analysis
Suppose that $X$ and $Y$ are topological vector spaces and that $u : X \to Y$ is a linear map; then many of $u$'s properties are reflected in ${}^{t}u$.
If and are weakly closed, convex sets containing the origin, then implies
The null space of ${}^{t}u$ is the subspace of $Y'$ orthogonal to the range $u(X)$ of $u$.
is injective if and only if the range of is weakly closed.
See also
References
Bibliography
Functional analysis
Linear algebra
Linear functionals | Transpose of a linear map | [
"Mathematics"
] | 1,341 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Mathematical relations",
"Linear algebra",
"Algebra"
] |
1,442,505 | https://en.wikipedia.org/wiki/Complex%20conjugate%20of%20a%20vector%20space | In mathematics, the complex conjugate of a complex vector space is a complex vector space that has the same elements and additive group structure as but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of satisfies
where is the scalar multiplication of and is the scalar multiplication of
The letter stands for a vector in is a complex number, and denotes the complex conjugate of
More concretely, the complex conjugate vector space is the same underlying vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure (different multiplication by ).
Motivation
If $V$ and $W$ are complex vector spaces, a function $f : V \to W$ is antilinear if
$$f(v + v') = f(v) + f(v') \quad \text{and} \quad f(\alpha v) = \overline{\alpha}\, f(v)$$
for all $v, v' \in V$ and $\alpha \in \mathbb{C}$.
With the use of the conjugate vector space $\overline{V}$, an antilinear map $f : V \to W$ can be regarded as an ordinary linear map of type $\overline{V} \to W$. The linearity is checked by noting:
$$f(\alpha * v) = f(\overline{\alpha} \cdot v) = \overline{\overline{\alpha}} \cdot f(v) = \alpha \cdot f(v).$$
Conversely, any linear map defined on $\overline{V}$ gives rise to an antilinear map on $V$.
This is the same underlying principle as in defining the opposite ring so that a right -module can be regarded as a left -module, or that of an opposite category so that a contravariant functor can be regarded as an ordinary functor of type
Complex conjugation functor
A linear map $f : V \to W$ gives rise to a corresponding linear map $\overline{f} : \overline{V} \to \overline{W}$ that has the same action as $f$. Note that $\overline{f}$ preserves scalar multiplication because
$$\overline{f}(\alpha * v) = f(\overline{\alpha} \cdot v) = \overline{\alpha} \cdot f(v) = \alpha * \overline{f}(v).$$
Thus, complex conjugation $V \mapsto \overline{V}$ and $f \mapsto \overline{f}$ define a functor from the category of complex vector spaces to itself.
If $V$ and $W$ are finite-dimensional and the map $f$ is described by the complex matrix $A$ with respect to the bases of $V$ and of $W$, then the map $\overline{f}$ is described by the complex conjugate of $A$ with respect to the corresponding bases of $\overline{V}$ and $\overline{W}$.
Structure of the conjugate
The vector spaces $V$ and $\overline{V}$ have the same dimension over the complex numbers and are therefore isomorphic as complex vector spaces. However, there is no natural isomorphism from $V$ to $\overline{V}$.
The double conjugate $\overline{\overline{V}}$ is identical to $V$.
Complex conjugate of a Hilbert space
Given a Hilbert space $H$ (either finite or infinite dimensional), its complex conjugate $\overline{H}$ is the same vector space as its continuous dual space $H'$.
There is a one-to-one antilinear correspondence between continuous linear functionals and vectors.
In other words, any continuous linear functional on $H$ is an inner multiplication with some fixed vector, and vice versa.
Thus, the complex conjugate to a vector $v$, particularly in the finite dimension case, may be denoted as $v^{\dagger}$ (v-dagger, a row vector that is the conjugate transpose of a column vector $v$).
In quantum mechanics, the conjugate to a ket vector $|\psi\rangle$ is denoted as $\langle\psi|$ – a bra vector (see bra–ket notation).
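As a small illustrative sketch (not from the article), the code below shows that the map $v \mapsto v^{\dagger}$ (conjugate transpose) is antilinear, which is why $v^{\dagger}$ is naturally viewed as living in the conjugate (dual) space; the vector and scalar are arbitrary example values.

```python
# Sketch: the dagger map v -> v^dagger is antilinear, i.e. (alpha v)^dagger
# equals conj(alpha) v^dagger. The example vector and scalar are arbitrary.
import numpy as np

v = np.array([[1 + 2j], [3 - 1j]])     # a column (ket-like) vector
alpha = 0.5 - 1.5j

lhs = (alpha * v).conj().T             # dagger of the scaled vector
rhs = np.conj(alpha) * v.conj().T      # conjugated scalar times the dagger
assert np.allclose(lhs, rhs)
print(lhs)
```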
See also
conjugate bundle
References
Further reading
Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. . (complex conjugate vector spaces are discussed in section 3.3, pag. 26).
Linear algebra
Vector space | Complex conjugate of a vector space | [
"Mathematics"
] | 600 | [
"Linear algebra",
"Algebra"
] |
1,442,624 | https://en.wikipedia.org/wiki/Sequence%20homology | Sequence homology is the biological homology between DNA, RNA, or protein sequences, defined in terms of shared ancestry in the evolutionary history of life. Two segments of DNA can have shared ancestry because of three phenomena: either a speciation event (orthologs), or a duplication event (paralogs), or else a horizontal (or lateral) gene transfer event (xenologs).
Homology among DNA, RNA, or proteins is typically inferred from their nucleotide or amino acid sequence similarity. Significant similarity is strong evidence that two sequences are related by evolutionary changes from a common ancestral sequence. Alignments of multiple sequences are used to indicate which regions of each sequence are homologous.
Identity, similarity, and conservation
The term "percent homology" is often used to mean "sequence similarity”, that is the percentage of identical residues (percent identity), or the percentage of residues conserved with similar physicochemical properties (percent similarity), e.g. leucine and isoleucine, is usually used to "quantify the homology." Based on the definition of homology specified above this terminology is incorrect since sequence similarity is the observation, homology is the conclusion. Sequences are either homologous or not. This involves that the term "percent homology" is a misnomer.
As with morphological and anatomical structures, sequence similarity might occur because of convergent evolution, or, as with shorter sequences, by chance, meaning that they are not homologous. Homologous sequence regions are also called conserved. This is not to be confused with conservation in amino acid sequences, where the amino acid at a specific position has been substituted with a different one that has functionally equivalent physicochemical properties.
Partial homology can occur where a segment of the compared sequences has a shared origin, while the rest does not. Such partial homology may result from a gene fusion event.
Orthology
Homologous sequences are orthologous if they are inferred to be descended from the same ancestral sequence separated by a speciation event: when a species diverges into two separate species, the copies of a single gene in the two resulting species are said to be orthologous. Orthologs, or orthologous genes, are genes in different species that originated by vertical descent from a single gene of the last common ancestor. The term "ortholog" was coined in 1970 by the molecular evolutionist Walter Fitch.
For instance, the plant Flu regulatory protein is present both in Arabidopsis (multicellular higher plant) and Chlamydomonas (single cell green algae). The Chlamydomonas version is more complex: it crosses the membrane twice rather than once, contains additional domains and undergoes alternative splicing. However, it can fully substitute the much simpler Arabidopsis protein, if transferred from algae to plant genome by means of genetic engineering. Significant sequence similarity and shared functional domains indicate that these two genes are orthologous genes, inherited from the shared ancestor.
Orthology is strictly defined in terms of ancestry. Given that the exact ancestry of genes in different organisms is difficult to ascertain due to gene duplication and genome rearrangement events, the strongest evidence that two similar genes are orthologous is usually found by carrying out phylogenetic analysis of the gene lineage. Orthologs often, but not always, have the same function.
Orthologous sequences provide useful information in taxonomic classification and phylogenetic studies of organisms. The pattern of genetic divergence can be used to trace the relatedness of organisms. Two organisms that are very closely related are likely to display very similar DNA sequences between two orthologs. Conversely, an organism that is further removed evolutionarily from another organism is likely to display a greater divergence in the sequence of the orthologs being studied.
Databases of orthologous genes and de novo orthology inference tools
Given their tremendous importance for biology and bioinformatics, orthologous genes have been organized in several specialized databases that provide tools to identify and analyze orthologous gene sequences. These resources employ approaches that can be generally classified into those that use heuristic analysis of all pairwise sequence comparisons, and those that use phylogenetic methods. Sequence comparison methods were first pioneered in the COGs database in 1997. These methods have been extended and automated in twelve different databases, the most advanced being AYbRAH (Analyzing Yeasts by Reconstructing Ancestry of Homologs), as well as the databases listed below. Some tools predict orthologs de novo from the input protein sequences and might not provide any database; among these tools are SonicParanoid and OrthoFinder. (A minimal sketch of the reciprocal best hit heuristic that many of the pairwise tools build on is shown after the lists below.)
eggNOG
GreenPhylDB for plants
InParanoid focuses on pairwise ortholog relationships
OHNOLOGS is a repository of the genes retained from whole genome duplications in the vertebrate genomes including human and mouse.
OMA
OrthoDB appreciates that the orthology concept is relative to different speciation points by providing a hierarchy of orthologs along the species tree.
OrthoInspector is a repository of orthologous genes for 4753 organisms covering the three domains of life
OrthologID
OrthoMaM for mammals
OrthoMCL
Roundup
SonicParanoid is a graph based method that uses machine learning to reduce execution times and infer orthologs at the domain level.
Tree-based phylogenetic approaches aim to distinguish speciation from gene duplication events by comparing gene trees with species trees, as implemented in databases and software tools such as:
LOFT
TreeFam
OrthoFinder
A third category of hybrid approaches uses both heuristic and phylogenetic methods to construct clusters and determine trees, for example:
EnsemblCompara GeneTrees
HomoloGene
Ortholuge
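Many of the pairwise, graph-based resources listed above build on the reciprocal best hit (RBH) idea. The sketch below is a minimal, simplified illustration of that heuristic; the gene names and scores are invented, and real tools add many refinements (score cut-offs, in-paralog clustering, domain-level analysis).

```python
# Minimal reciprocal-best-hit (RBH) sketch: gene a (species A) and gene b
# (species B) are called putative orthologs when each is the other's
# highest-scoring match. Gene names and scores below are invented examples.

def reciprocal_best_hits(scores_ab, scores_ba):
    """scores_ab[a][b] = similarity score of a vs b; scores_ba is the reverse search."""
    best_ab = {a: max(hits, key=hits.get) for a, hits in scores_ab.items()}
    best_ba = {b: max(hits, key=hits.get) for b, hits in scores_ba.items()}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

scores_ab = {"geneA1": {"geneB1": 95, "geneB2": 40}, "geneA2": {"geneB1": 35, "geneB2": 88}}
scores_ba = {"geneB1": {"geneA1": 95, "geneA2": 35}, "geneB2": {"geneA1": 40, "geneA2": 88}}
print(reciprocal_best_hits(scores_ab, scores_ba))  # [('geneA1', 'geneB1'), ('geneA2', 'geneB2')]
```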
Paralogy
Paralogous genes are genes that are related via duplication events in the last common ancestor (LCA) of the species being compared. They result from the mutation of duplicated genes during separate speciation events. When descendants from the LCA share mutated homologs of the original duplicated genes then those genes are considered paralogs.
As an example, in the LCA, one gene (gene A) may get duplicated to make a separate similar gene (gene B), those two genes will continue to get passed to subsequent generations. During speciation, one environment will favor a mutation in gene A (gene A1), producing a new species with genes A1 and B. Then in a separate speciation event, one environment will favor a mutation in gene B (gene B1) giving rise to a new species with genes A and B1. The descendants' genes A1 and B1 are paralogous to each other because they are homologs that are related via a duplication event in the last common ancestor of the two species.
Additional classifications of paralogs include alloparalogs (out-paralogs) and symparalogs (in-paralogs). Alloparalogs are paralogs that evolved from gene duplications that preceded the given speciation event. In other words, alloparalogs are paralogs that evolved from duplication events that happened in the LCA of the organisms being compared. The example above is an example alloparalogy. Symparalogs are paralogs that evolved from gene duplication of paralogous genes in subsequent speciation events. From the example above, if the descendant with genes A1 and B underwent another speciation event where gene A1 duplicated, the new species would have genes B, A1a, and A1b. In this example, genes A1a and A1b are symparalogs.
Paralogous genes can shape the structure of whole genomes and thus explain genome evolution to a large extent. Examples include the Homeobox (Hox) genes in animals. These genes not only underwent gene duplications within chromosomes but also whole genome duplications. As a result, Hox genes in most vertebrates are clustered across multiple chromosomes with the HoxA-D clusters being the best studied.
Another example are the globin genes which encode myoglobin and hemoglobin and are considered to be ancient paralogs. Similarly, the four known classes of hemoglobins (hemoglobin A, hemoglobin A2, hemoglobin B, and hemoglobin F) are paralogs of each other. While each of these proteins serves the same basic function of oxygen transport, they have already diverged slightly in function: fetal hemoglobin (hemoglobin F) has a higher affinity for oxygen than adult hemoglobin. Function is not always conserved, however. Human angiogenin diverged from ribonuclease, for example, and while the two paralogs remain similar in tertiary structure, their functions within the cell are now quite different.
It is often asserted that orthologs are more functionally similar than paralogs of similar divergence, but several papers have challenged this notion.
Regulation
Paralogs are often regulated differently, e.g. by having different tissue-specific expression patterns (see Hox genes). However, they can also be regulated differently on the protein level. For instance, Bacillus subtilis encodes two paralogues of glutamate dehydrogenase: GudB is constitutively transcribed whereas RocG is tightly regulated. In their active, oligomeric states, both enzymes show similar enzymatic rates. However, swaps of enzymes and promoters cause severe fitness losses, thus indicating promoter–enzyme coevolution. Characterization of the proteins shows that, compared to RocG, GudB's enzymatic activity is highly dependent on glutamate and pH.
Paralogous chromosomal regions
Sometimes, large regions of chromosomes share gene content similar to other chromosomal regions within the same genome. They are well characterised in the human genome, where they have been used as evidence to support the 2R hypothesis. Sets of duplicated, triplicated and quadruplicated genes, with the related genes on different chromosomes, are deduced to be remnants from genome or chromosomal duplications. A set of paralogy regions is together called a paralogon. Well-studied sets of paralogy regions include regions of human chromosome 2, 7, 12 and 17 containing Hox gene clusters, collagen genes, keratin genes and other duplicated genes, regions of human chromosomes 4, 5, 8 and 10 containing neuropeptide receptor genes, NK class homeobox genes and many more gene families, and parts of human chromosomes 13, 4, 5 and X containing the ParaHox genes and their neighbors. The Major histocompatibility complex (MHC) on human chromosome 6 has paralogy regions on chromosomes 1, 9 and 19. Much of the human genome seems to be assignable to paralogy regions.
Ohnology
Ohnologous genes are paralogous genes that have originated by a process of whole-genome duplication. The name was first given in honour of Susumu Ohno by Ken Wolfe. Ohnologues are useful for evolutionary analysis because all ohnologues in a genome have been diverging for the same length of time (since their common origin in the whole genome duplication). Ohnologues are also known to show greater association with cancers, dominant genetic disorders, and pathogenic copy number variations.
Xenology
Homologs resulting from horizontal gene transfer between two organisms are termed xenologs. Xenologs can have different functions if the new environment is vastly different for the horizontally moving gene. In general, though, xenologs typically have similar function in both organisms. The term was coined by Walter Fitch.
Homoeology
Homoeologous (also spelled homeologous) chromosomes or parts of chromosomes are those brought together following inter-species hybridization and allopolyploidization to form a hybrid genome, and whose relationship was completely homologous in an ancestral species. In allopolyploids, the homologous chromosomes within each parental sub-genome should pair faithfully during meiosis, leading to disomic inheritance; however in some allopolyploids, the homoeologous chromosomes of the parental genomes may be nearly as similar to one another as the homologous chromosomes, leading to tetrasomic inheritance (four chromosomes pairing at meiosis), intergenomic recombination, and reduced fertility.
Gametology
Gametology denotes the relationship between homologous genes on non-recombining, opposite sex chromosomes. The term was coined by García-Moreno and Mindell (2000). Gametologs result from the origination of genetic sex determination and barriers to recombination between sex chromosomes. Examples of gametologs include CHDW and CHDZ in birds.
See also
Deep homology
EggNOG (database)
OrthoDB
Orthologous MAtrix (OMA)
PhEVER
Protein family
Protein superfamily
TreeFam
Syntelog
References
Evolutionary biology
Phylogenetics
Evolutionary developmental biology | Sequence homology | [
"Biology"
] | 2,751 | [
"Evolutionary biology",
"Phylogenetics",
"Bioinformatics",
"Taxonomy (biology)"
] |
1,443,002 | https://en.wikipedia.org/wiki/Environmental%20technology | Environmental technology (envirotech) is the use of engineering and technological approaches to understand and address issues that affect the environment with the aim of fostering environmental improvement. It involves the application of science and technology in the process of addressing environmental challenges through environmental conservation and the mitigation of human impact to the environment.
The term is sometimes also used to describe sustainable energy generation technologies such as photovoltaics, wind turbines, etc.
Purification and waste management
Water purification
Air purification
Air purification describes the processes used to remove contaminants and pollutants from the air to reduce the potential adverse effects on humans and the environment. The process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation.
Sewage treatment
Environmental remediation
Environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. The main focus is the reduction of hazardous substances within the environment. Some of the areas involved in environmental remediation include; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. There are three most common types of environmental remediation. These include soil, water, and sediment remediation.
Soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. Some examples of this are heavy metals, pesticides, and radioactive materials. Depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological.
Water remediation is one of the most important considering water is an essential natural resource. Depending on the source of water there will be different contaminants. Surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. There has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. The market for water remediation is expected to consistently increase to $19.6 billion by 2030.
Sediment remediation consists of removing contaminated sediments. It is similar to soil remediation, except that it is often more sophisticated, as it involves additional contaminants. Physical, chemical, and biological processes are likely to be used to reduce the contaminants and help with source control, but even if these processes are executed correctly, there is a risk of the contamination resurfacing.
Solid waste management
Solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city or town. It refers to the collection, treatment, and disposal of non-soluble, solid waste material. Solid waste is associated with industrial, institutional, commercial, and residential activities. Hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. Some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. However, a major barrier to solid waste management practices is the high cost associated with recycling and the risk of creating more pollution.
E-Waste Recycling
The recycling of electronic waste (e-waste) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals. Traditional e-waste recycling methods, which often involve manual disassembly, expose workers to hazardous materials and are labor-intensive. Recent innovations have introduced automated processes that improve safety and efficiency, allowing for more precise separation and recovery of valuable materials.
Modern e-waste recycling techniques now leverage automated shredding and advanced sorting technologies, which help in effectively segregating different types of materials for recycling. This not only enhances the recovery rate of precious metals but also minimizes the environmental impact by reducing the amount of waste destined for landfills. Furthermore, research into biodegradable electronics aims to reduce future e-waste through the development of electronics that can decompose more naturally in the environment.
These advancements support a shift towards a circular economy, where the lifecycle of materials is extended, and environmental impacts are significantly minimized.
Bioremediation
Bioremediation is a process that uses microorganisms such as bacteria, fungi, plant enzymes, and yeast to neutralize hazardous contaminants in the environment. This can help mitigate a variety of environmental hazards, including oil spills, pesticides, heavy metals, and other pollutants. Bioremediation can be conducted either on-site ('in situ') or off-site ('ex situ'), the latter often being necessary if the climate is too cold. Factors influencing the duration of bioremediation include the extent of the contamination and the environmental conditions, with timelines that can range from months to years.
Examples
Biofiltration
Bioreactor
Bioremediation
Composting toilet
Desalination
Thermal depolymerization
Pyrolysis
Sustainable energy
Concerns over pollution and greenhouse gases have spurred the search for sustainable alternatives to fossil fuel use. The global reduction of greenhouse gases requires the adoption of energy conservation as well as sustainable generation. That environmental harm reduction involves global changes such as:
substantially reducing methane emissions from melting perma-frost, animal husbandry, pipeline and wellhead leakage.
virtually eliminating fossil fuels for vehicles, heat, and electricity.
carbon dioxide capture and sequestration at point of combustion.
widespread use of public transport, battery, and fuel cell vehicles
extensive implementation of wind/solar/water generated electricity
reducing peak demands with carbon taxes and time of use pricing.
Since fuel used by industry and transportation accounts for the majority of world demand, by investing in conservation and efficiency (using less fuel), pollution and greenhouse gases from these two sectors can be reduced around the globe. Advanced energy-efficient electric motor (and electric generator) technologies that are cost-effective enough to encourage their application, such as variable speed generators and efficient energy use, can reduce the amount of carbon dioxide (CO2) and sulfur dioxide (SO2) that would otherwise be introduced to the atmosphere if electricity were generated using fossil fuels. Some scholars have expressed concern that the implementation of new environmental technologies in highly developed national economies may cause economic and social disruption in less-developed economies.
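As a back-of-the-envelope illustration of the point above (not a figure from any source), avoided emissions from an efficiency measure can be estimated as energy saved times a grid emission factor; the 0.4 kg CO2/kWh factor below is an assumed placeholder and varies widely between grids.

```python
# Back-of-the-envelope only: avoided CO2 from an efficiency measure, using an
# assumed grid emission factor (0.4 kg CO2 per kWh is a placeholder; real
# factors depend on the local generation mix).

def avoided_co2_kg(energy_saved_kwh: float, grid_factor_kg_per_kwh: float = 0.4) -> float:
    return energy_saved_kwh * grid_factor_kg_per_kwh

# e.g. a more efficient motor that uses 10,000 kWh less per year:
print(f"{avoided_co2_kg(10_000):,.0f} kg CO2 avoided per year")
```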
Renewable energy
Renewable energy is energy that can be replenished easily. For years, sources such as wood, the sun, and water have been used as means of producing energy. Energy that can be produced from natural sources like the sun and wind is considered renewable. Technologies that have been in use include wind power, hydropower, solar energy, geothermal energy, and biomass/bioenergy. Renewable energy refers to any form of energy that naturally regenerates over time and does not run out. This form of energy naturally replenishes and is characterized by a low carbon footprint. Some of the most common types of renewable energy sources include solar power, wind power, hydroelectric power, and bioenergy, which is generated by burning organic matter.
Examples
Energy saving modules
Heat pump
Hydrogen fuel cell
Hydroelectricity
Ocean thermal energy conversion
Photovoltaic
Solar power
Wave energy
Wind power
Wind turbine
Renewable Energy Innovations
The intersection of technology and sustainability has led to innovative solutions aimed at enhancing the efficiency of renewable energy systems. One such innovation is the integration of wind and solar power to maximize energy production. Companies like Unéole are pioneering technologies that combine solar panels with wind turbines on the same platform, which is particularly advantageous for urban environments with limited space. This hybrid system not only conserves space but also increases the energy yield by leveraging the complementary nature of solar and wind energy availability.
Furthermore, advancements in offshore wind technology have significantly increased the viability and efficiency of wind energy. Modern offshore wind turbines feature improvements in structural design and aerodynamics, which enhance their energy capture and reduce costs. These turbines are now more adaptable to various marine environments, allowing for greater flexibility in location and potentially reducing visual pollution. The floating wind turbines, for example, use tension leg platforms and spar buoys that can be deployed in deeper waters, significantly expanding the potential areas for wind energy generation
Such innovations not only advance the capabilities of individual renewable technologies but also contribute to a more resilient and sustainable energy grid. By optimizing the integration and efficiency of renewable resources, these technologies play a crucial role in the transition towards a sustainable energy future.
Energy conservation
Energy conservation is the utilization of devices that require smaller amounts of energy in order to reduce the consumption of electricity. Reducing the use of electricity causes fewer fossil fuels to be burned to provide that electricity. It also refers to the practice of using less energy through changes in individual behaviors and habits. The main emphasis of energy conservation is the prevention of wasteful use of energy, to enhance its availability. Some of the main approaches to energy conservation involve refraining from using devices that consume more energy, where possible.
eGain forecasting
Egain forecasting is a method using forecasting technology to predict the future weather's impact on a building. By adjusting the heat based on the weather forecast, the system eliminates redundant use of heat, thus reducing the energy consumption and the emission of greenhouse gases. It is a technology introduced by eGain International, a Swedish company that intelligently balances building power consumption. The technology involves forecasting the amount of heating energy required by a building within a specific period, which results in energy efficiency and sustainability. eGain lowers building energy consumption and emissions while determining the time for maintenance where inefficiencies are observed.
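The following is a deliberately simplified, generic illustration of forecast-driven heating adjustment using a heating degree-day model; it is not eGain's actual algorithm, and the base temperature and forecast values are assumed example inputs.

```python
# Generic illustration only (not eGain's method): let forecast outdoor
# temperatures, rather than a fixed schedule, drive the heating estimate.
# base_temp_c and the forecast list are assumed example inputs.

def heating_degree_days(forecast_daily_mean_c, base_temp_c=17.0):
    """Sum of (base - outdoor mean) over forecast days when heating is needed."""
    return sum(max(0.0, base_temp_c - t) for t in forecast_daily_mean_c)

forecast = [4.5, 6.0, 12.0, 18.5, 9.0]     # example 5-day forecast, deg C
hdd = heating_degree_days(forecast)
print(f"forecast heating degree-days: {hdd:.1f}")
# A controller could scale delivered heat in proportion to hdd instead of
# heating to a constant setpoint regardless of the coming weather.
```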
Solar Power
Computational sustainability
Sustainable Agriculture
Sustainable agriculture is an approach to farming that utilizes technology in a way that ensures food protection, while ensuring the long-term health and productivity of agricultural systems, ecosystems, and communities. Historically, technological advancements have significantly contributed to increasing agricultural productivity and reducing physical labor.
The National Institute of Food and Agriculture improves sustainable agriculture through the use of funded programs aimed at fulfilling human food and fiber needs, improving environmental quality, preserving natural resources vital to the agricultural economy, optimizing the utilization of both nonrenewable and on-farm resources while integrating natural biological cycles and controls as appropriate, maintaining the economic viability of farm operations, and fostering an improved quality of life for farmers and society at large. Among its initiatives, the NIFA seeks to improve farm and ranch practices, integrated pest management, rotational grazing, soil conservation, water quality/wetlands, cover crops, crop/landscape diversity, nutrient management, agroforestry, and alternative marketing.
Education
Courses aimed at developing graduates with some specific skills in environmental systems or environmental technology are becoming more common and fall into three broad classes:
Environmental Engineering or Environmental Systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment;
Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects (good and bad) of chemicals in the environment. Such awards can focus on mining processes pollutants and commonly also cover biochemical processes;
Environmental technology courses are oriented towards producing electronic, electrical, or electrotechnology graduates capable of developing devices and artifacts that can monitor, measure, model, and control environmental impact, including monitoring and managing energy generation from renewable sources and developing novel energy generation technologies.
See also
Appropriate technology
Bright green environmentalism
Eco-innovation
Ecological modernization
Ecosia
Ecotechnology
Environmentally friendly
Green development
Groasis Waterboxx
Ice house (building)
Information and communication technologies for environmental sustainability
Pulser Pump
Smog tower
Sustainable design
Sustainable energy
Sustainable engineering
Sustainable living
Sustainable technologies
Technology for sustainable development
The All-Earth Ecobot Challenge
Windcatcher
WIPO GREEN
References
Further reading
External links
Bright green environmentalism
Energy economics | Environmental technology | [
"Environmental_science"
] | 2,427 | [
"Energy economics",
"Environmental social science"
] |
1,443,370 | https://en.wikipedia.org/wiki/Ferrocement | Ferrocement or ferro-cement is a system of construction using reinforced mortar or plaster (lime or cement, sand, and water) applied over an "armature" of metal mesh, woven, expanded metal, or metal-fibers, and closely spaced thin steel rods such as rebar. The metal commonly used is iron or some type of steel, and the mesh is made with wire with a diameter between 0.5 mm and 1 mm. The cement is typically a very rich mix of sand and cement in a 3:1 ratio; when used for making boards, no gravel is used, so that the material is not concrete.
Ferrocement is used to construct relatively thin, hard, strong surfaces and structures in many shapes such as hulls for boats, shell roofs, and water tanks. Ferrocement originated in the 1840s in France and the Netherlands and is the precursor to reinforced concrete. It has a wide range of other uses, including sculpture and prefabricated building components. The term "ferrocement" has been applied by extension to other composite materials, including some containing no cement and no ferrous material.
The "Mulberry harbours" used in the D-Day landings were made of ferrocement, and their remains may still be seen at resorts like Arromanches.
Definitions
Cement and concrete are often used interchangeably, but there are technical distinctions, and the meaning of cement has changed since the mid-nineteenth century when ferrocement originated. Ferro- means iron, although the metal commonly used in ferro-cement is the iron alloy steel. Cement in the nineteenth century and earlier meant mortar, or broken stone or tile mixed with lime and water to form a strong mortar. Today cement usually means Portland cement; mortar is a paste of a binder (usually Portland cement), sand, and water; and concrete is a fluid mixture of Portland cement, sand, water, and crushed stone aggregate which is poured into formwork (shuttering). Ferro-concrete is the original name of reinforced concrete (armored concrete), known at least since the 1890s and well described in 1903 in the journal of London's Society of Engineers, but it is now widely confused with ferrocement.
History
The inventors of ferrocement were the Frenchmen Joseph Monier, who dubbed it "ciment armé" (armored cement), and Joseph-Louis Lambot, who constructed a boat with the system in 1848. Lambot exhibited the vessel at the Exposition Universelle in 1855 and his name for the material, "ferciment", stuck. Lambot patented his boat in 1855 but the patent was granted in Belgium and only applied to that country. At the time of Monier's first patent, July 1867, he planned to use his material to create urns, planters, and cisterns. These implements were traditionally made from ceramics, but large-scale, kiln-fired projects were expensive and prone to failure. In 1875, Monier expanded his patents to include bridges and designed his first steel-and-concrete bridge. The outer layer was sculpted to mimic rustic logs and timbers, thereby also ushering in faux bois (fake wood) concrete. In the first half of the twentieth century the Italian Pier Luigi Nervi was noted for his use of ferro-cement, in Italian called ferro-cemento.
Ferroconcrete has relatively good strength and resistance to impact. When used in house construction in developing countries, it can provide better resistance to fire, earthquake, and corrosion than traditional materials, such as wood, adobe and stone masonry. It has been popular in developed countries for yacht building because the technique can be learned relatively quickly, allowing people to cut costs by supplying their own labor. In the 1930s through 1950s, it became popular in the United States as a construction and sculpting method for novelty architecture, examples of which are the Cabazon Dinosaurs and the works of Albert Vrana.
Construction formwork
The desired shape may be built from a multi-layered construction of mesh, supported by an armature, or grid, built with rebar and tied with wire. For optimum performance, steel should be rust-treated, (galvanized) or stainless steel. Over this finished framework, an appropriate mixture (grout or mortar) of Portland cement, sand and water and/or admixtures is applied to penetrate the mesh. During hardening, the assembly may be kept moist, to ensure that the concrete is able to set and harden slowly and to avoid developing cracks that can weaken the system. Steps should be taken to avoid trapped air in the internal structure during the wet stage of construction as this can also create cracks that will form as it dries. Trapped air will leave voids that allow water to collect and degrade (rust) the steel. Modern practice often includes spraying the mixture at pressure (a technique called shotcrete) or some other method of driving out trapped air.
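For rough planning only, the sketch below estimates material quantities for a shell of given area and thickness, assuming the rich 3:1 sand-to-cement mix (by volume) mentioned in the introduction; the 25% allowance and the bulk densities are assumed round figures, not values from any standard.

```python
# Rough estimator (illustrative assumptions only): mortar quantities for a
# ferrocement shell using the 3:1 sand:cement mix by volume described above.
# The 25% allowance for mesh penetration/waste and the bulk densities are
# assumed round figures.

def mortar_estimate(area_m2: float, thickness_m: float) -> dict:
    volume = area_m2 * thickness_m * 1.25           # shell volume + 25% allowance
    cement_vol = volume / 4.0                       # 1 part cement of 4 total parts
    sand_vol = 3.0 * volume / 4.0                   # 3 parts sand
    return {
        "wet_mortar_m3": round(volume, 3),
        "cement_kg": round(cement_vol * 1440, 1),   # ~1440 kg/m3 bulk density (assumed)
        "sand_kg": round(sand_vol * 1600, 1),       # ~1600 kg/m3 bulk density (assumed)
    }

print(mortar_estimate(area_m2=20.0, thickness_m=0.025))  # e.g. a small tank or hull shell
```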
Older structures that have failed offer clues to better practices. In addition to eliminating air where it contacts steel, modern concrete additives may include acrylic liquid "admixtures" to slow moisture absorption and increase shock resistance to the hardened product or to alter curing rates. These technologies, borrowed from the commercial tile installation trade, have greatly aided in the restoration of these structures. Chopped glass or poly fiber can be added to reduce crack development in the outer skin. (Chopped fiber could inhibit good penetration of the grout to steel mesh constructions. This should be taken into consideration and mitigated, or limited to use on outer subsequent layers. Chopped fibers may also alter or limit some wet sculpting techniques.)
Economics
The economic advantage of ferro-concrete structures is that they are stronger and more durable than those built by some traditional methods. Ferro-concrete structures can also be built quickly, which can bring further economic benefits.
In India, ferro-concrete is often used because structures made from it are more resistant to earthquakes, although earthquake resistance depends on good construction technique.
In the 1970s, designers adapted their yacht designs to the then very popular backyard building scheme of building a boat using ferrocement. Its big attraction was that for minimum outlay and costs, a reasonable application of skill, an amateur could construct a smooth, strong and substantial yacht hull. A ferro-cement hull can prove to be of similar or lower weight than a fiber reinforced plastic (fiberglass), aluminium, or steel hull.
There are three basic construction methods for ferrocement:
Armature system: The skeletal steel is welded to the desired shape, and several layers of stretched mesh are tied to either side of it. The assembly is strong enough that mortar can be pressed in from one side while the other side is temporarily supported; mortar can also be pressed in from both sides. Because the skeletal steel bars lie at the centre of the section, they add to the dead weight without contributing to strength.
Closed mould system: Several layers of mesh are tied together against the surface of a mould, which holds them in position while the mortar is applied. The mould may be removed after curing or may remain in place as a permanent part of the finished structure. If the mould is to be removed for reuse, a releasing agent must be used.
Integrated mould system: An integral mould is first constructed with minimal reinforcement to act as a framework. Layers of mesh are fixed to either side of this mould and plastered from both sides. As the name suggests, the mould remains permanently as an integral part of the finished structure (e.g. double-T sections for flooring, roofing, etc.). Care must be taken to achieve a firm connection between the mould and the layers applied later, so that the finished product acts as a single integral structural unit.
Advantages
The advantages of a well-built ferro-concrete construction are its low weight, low maintenance costs, and long lifetime in comparison with purely steel constructions. However, meticulous building precision is considered crucial, especially with respect to the cementitious composition and the way in which it is applied in and on the framework, and how or if the framework has been treated to resist corrosion.
When a ferro concrete sheet is mechanically overloaded, it will tend to fold instead of break or crumble like stone or pottery. As a container, it may fail and leak but possibly hold together. Much depends on the techniques used in the construction.
Using the example of the Mulberry Harbours, pre-fabricated units could be made for ports (such as Jamestown on St Helena) where conventional civil engineering is difficult.
Disadvantages
The main disadvantage of ferro-concrete construction is its labor-intensive nature, which makes it expensive for industrial application in the Western world. In addition, the steel components can degrade (rust) if air voids are left in the original construction, whether because the applied mixture was too dry or because the air was not driven out of the structure during the wet stage of construction by vibration, pressurized spraying, or other means. These air voids can become pools of water as the cured material absorbs moisture. If the voids occur where there is untreated steel, the steel will rust and expand, causing the system to fail.
In modern practice, the advent of liquid acrylic additives and other advances to the grout mixture create slower moisture absorption over the older formulas, and also increase bonding strength to mitigate these failures. Restoration steps should include treatment to the steel to arrest rust, using practices for treating old steel common in auto body repair.
Insurance issues
During the 1960s in Australia, New Zealand and the UK, home boatbuilders realised that, for a given budget, ferrocement enabled a much larger hull than otherwise possible. However, some builders failed to realise that the hull forms only a minor part of the overall cost because a larger boat would have very much higher fitting-out costs. Consequently, several homebuilt ferrocement boats became unfinished projects, or if finished, then badly executed, overweight, lumpy "horrors". Realising that their boats were not merely disappointing but also unsaleable, some builders insured their boats and fraudulently scuppered them for compensation. Insurance companies have long memories of such frauds, and today, even for well-built ferrocement boats, it has become difficult to get insurance coverage for third-party risks, while comprehensive cover is virtually unattainable.
See also
Types of concrete
François Hennébique
François Coignet
Faux Bois
Concrete ship
Big Duck
References
External links
Ferrocement Educational Network
Barcos de ferrocemento (in Spanish)
Building materials
Reinforced concrete
Sculpture materials
Soil-based building materials
Cement | Ferrocement | [
"Physics",
"Engineering"
] | 2,234 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
1,443,820 | https://en.wikipedia.org/wiki/Penguin%20diagram | In quantum field theory, penguin diagrams are a class of Feynman diagrams which are important for understanding CP violating processes in the standard model. They refer to one-loop processes in which a quark temporarily changes flavor (via a W or Z loop), and the flavor-changed quark engages in some tree interaction, typically a strong one. For the interactions where some quark flavors (e.g., very heavy ones) have much higher interaction amplitudes than others, such as CP-violating or Higgs interactions, these penguin processes may have amplitudes comparable to or even greater than those of the direct tree processes. A similar diagram can be drawn for leptonic decays.
They were first isolated and studied by Mikhail Shifman, Arkady Vainshtein, and Valentin Zakharov.
The processes which they describe were first directly observed in 1991 and 1994 by the CLEO collaboration.
Origin of the name
John Ellis was the first to refer to a certain class of Feynman diagrams as "penguin diagrams" in a 1977 paper on b-quarks. The name came about in part due to their shape, and in part due to a legendary bar-room bet with Melissa Franklin. According to John Ellis:
See also
John Ellis
References
Electroweak theory
Diagrams | Penguin diagram | [
"Physics"
] | 262 | [
"Physical phenomena",
"Fundamental interactions",
"Electroweak theory"
] |
1,444,981 | https://en.wikipedia.org/wiki/Micro%20process%20engineering | Micro process engineering is the science of conducting chemical or physical processes (unit operations) inside small volumina,
typically inside channels with diameters of less than 1 mm
(microchannels) or other structures with sub-millimeter dimensions.
These processes are usually carried out in continuous flow mode, as opposed to batch production, allowing a throughput high enough to make micro process engineering a tool for chemical production. Micro process engineering is therefore not to be confused with microchemistry, which deals with very small overall quantities of matter.
The subfield of micro process engineering that deals with chemical reactions, carried out in microstructured reactors or "microreactors", is also known as microreaction technology.
The unique advantages of microstructured reactors or microreactors are enhanced heat transfer due to the large surface area-to-volume ratio, and enhanced mass transfer. For example, the length scale of diffusion processes is comparable to that of microchannels or even shorter, and efficient mixing of reactants can be achieved during very short times (typically milliseconds). The good heat transfer properties allow a precise temperature control of reactions. For example, highly exothermic reactions can be conducted almost isothermally when the microstructured reactor contains a second set of microchannels ("cooling passage"), fluidically separated from the reaction channels ("reaction passage"), through which a flow of cold fluid with sufficiently high heat capacity is maintained. It is also possible to change the temperature of microstructured reactors very rapidly to intentionally achieve a non-isothermal behaviour.
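These scaling arguments can be made concrete with a back-of-the-envelope calculation. The sketch below (Python; the channel diameters and the diffusion coefficient are illustrative assumptions, not values from the text) compares the surface-area-to-volume ratio of a cylindrical channel, which scales as 4/d, with the characteristic transverse diffusion time, which scales as d²/D.

```python
# Back-of-the-envelope scaling for microchannels (illustrative values only).
# For a cylindrical channel of diameter d, surface area / volume = 4 / d,
# and the characteristic transverse diffusion (mixing) time is t ~ d^2 / D.

D = 1e-9  # assumed liquid-phase diffusion coefficient, m^2/s (typical order of magnitude)

for d in (1e-3, 100e-6):          # channel diameters: 1 mm and 100 micrometres
    sv_ratio = 4.0 / d            # surface-area-to-volume ratio, 1/m
    t_mix = d**2 / D              # diffusion time scale, s
    print(f"d = {d*1e6:6.0f} um:  S/V = {sv_ratio:8.0f} 1/m,  t_diff ~ {t_mix:7.1f} s")
```

Shrinking the channel diameter by a factor of ten raises the surface-area-to-volume ratio tenfold and cuts the diffusive mixing time a hundredfold, which is the essence of the heat- and mass-transfer advantage described above.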
Process intensification
While the dimensions of the individual channels are small, a micro process engineering device ("microstructured reactor") can contain many thousands of such channels, and the overall size of a microstructured reactor can be on the scale of meters. The objective of micro process engineering is not primarily to miniaturize production plants, but to increase yields and selectivities of chemical reactions, thus reducing the cost of chemical production. This goal can be achieved either by using chemical reactions that cannot be conducted in larger volumes, or by running chemical reactions at parameters (temperatures, pressures, concentrations) that are inaccessible in larger volumes due to safety constraints. For example, the detonation of a stoichiometric mixture of two volume units of hydrogen gas and one volume unit of oxygen gas does not propagate in microchannels with a sufficiently small diameter. This property is referred to as the "intrinsic safety" of microstructured reactors. The improvement of yields and selectivities by using novel reactions or running reactions at more extreme parameters is known as "process intensification".
History
Historically, micro process engineering originated around the 1980s, when mechanical micromachining methods developed for the fabrication of uranium isotope separation nozzles were first applied to the manufacturing of compact heat exchangers at the Karlsruhe (Nuclear) Research Center.
See also
Flow chemistry
Microreactor
Chemical process engineering
Microtechnology | Micro process engineering | [
"Chemistry",
"Materials_science",
"Engineering"
] | 611 | [
"Chemical process engineering",
"Chemical engineering",
"Materials science",
"Microtechnology"
] |
29,103,307 | https://en.wikipedia.org/wiki/Alexandrov%27s%20uniqueness%20theorem | The Alexandrov uniqueness theorem is a rigidity theorem in mathematics, describing three-dimensional convex polyhedra in terms of the distances between points on their surfaces. It implies that convex polyhedra with distinct shapes from each other also have distinct metric spaces of surface distances, and it characterizes the metric spaces that come from the surface distances on polyhedra. It is named after Soviet mathematician Aleksandr Danilovich Aleksandrov, who published it in the 1940s.
Statement of the theorem
The surface of any convex polyhedron in Euclidean space forms a metric space, in which the distance between two points is measured by the length of the shortest path from one point to the other along the surface. Within a single shortest path, distances between pairs of points equal the distances between corresponding points of a line segment of the same length; a path with this property is known as a geodesic.
This property of polyhedral surfaces, that every pair of points is connected by a geodesic, is not true of many other metric spaces, and when it is true the space is called a geodesic space. The geodesic space formed from the surface of a polyhedron is called its development.
The polyhedron can be thought of as being folded from a sheet of paper (a net for the polyhedron) and it inherits the same geometry as the paper: for every point p within a face of the polyhedron, a sufficiently small open neighborhood of p will have the same distances as a subset of the Euclidean plane. The same thing is true even for points on the edges of the polyhedron: they can be modeled locally as a Euclidean plane folded along a line and embedded into three-dimensional space, but the fold does not change the structure of shortest paths along the surface. However, the vertices of the polyhedron have a different distance structure: the local geometry of a polyhedron vertex is the same as the local geometry at the apex of a cone. Any cone can be formed from a flat sheet of paper with a wedge removed from it by gluing together the cut edges where the wedge was removed. The angle of the wedge that was removed is called the angular defect of the vertex; it is a positive number less than 2π. The defect of a polyhedron vertex can be measured by subtracting the face angles at that vertex from 2π. For instance, in a regular tetrahedron, each face angle is π/3, and there are three of them at each vertex, so subtracting them from 2π leaves a defect of π at each of the four vertices.
Similarly, a cube has a defect of π/2 at each of its eight vertices. Descartes' theorem on total angular defect (a form of the Gauss–Bonnet theorem) states that the sum of the angular defects of all the vertices is always exactly 4π. In summary, the development of a convex polyhedron is geodesic, homeomorphic (topologically equivalent) to a sphere, and locally Euclidean except for a finite number of cone points whose angular defect sums to 4π.
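As a quick numerical check of Descartes' theorem, the following sketch (Python; only the two regular solids mentioned above are used) sums the angular defects over all vertices and confirms that the total is 4π in each case.

```python
import math

# Angular defect at a vertex = 2*pi minus the sum of the face angles meeting there.
# Regular tetrahedron: 3 equilateral-triangle angles (pi/3) meet at each of 4 vertices.
# Cube: 3 right angles (pi/2) meet at each of 8 vertices.
solids = {
    "tetrahedron": (4, 3 * math.pi / 3),
    "cube":        (8, 3 * math.pi / 2),
}

for name, (num_vertices, angle_sum) in solids.items():
    defect = 2 * math.pi - angle_sum
    total = num_vertices * defect
    print(f"{name}: defect per vertex = {defect:.4f}, total = {total:.4f} (4*pi = {4*math.pi:.4f})")
```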
Alexandrov's theorem gives a converse to this description. It states that if a metric space is geodesic, homeomorphic to a sphere, and locally Euclidean except for a finite number of cone points of positive angular defect (necessarily summing to 4π), then there exists a convex polyhedron whose development is the given space. Moreover, this polyhedron is uniquely defined from the metric: any two convex polyhedra with the same surface metric must be congruent to each other as three-dimensional sets.
Limitations
The polyhedron representing the given metric space may be degenerate: it may form a doubly-covered two-dimensional convex polygon (a dihedron) rather than a fully three-dimensional polyhedron. In this case, its surface metric consists of two copies of the polygon (its two sides) glued together along corresponding edges.
Although Alexandrov's theorem states that there is a unique convex polyhedron whose surface has a given metric, it may also be possible for there to exist non-convex polyhedra with the same metric. An example is given by the regular icosahedron: if five of its triangles are removed, and are replaced by five congruent triangles forming an indentation into the polyhedron, the resulting surface metric stays unchanged. This example uses the same creases for the convex and non-convex polyhedron, but that is not always the case. For instance, the surface of a regular octahedron can be re-folded along different creases into a non-convex polyhedron with 24 equilateral triangle faces, the Kleetope obtained by gluing square pyramids onto the squares of a cube. Six triangles meet at each additional vertex introduced by this refolding, so they have zero angular defect and remain locally Euclidean. In the illustration of an octahedron folded from four hexagons, these 24 triangles are obtained by subdividing each hexagon into six triangles.
The development of any polyhedron can be described concretely by a collection of two-dimensional polygons together with instructions for gluing them together along their edges to form a metric space, and the conditions of Alexandrov's theorem for spaces described in this way are easily checked. However, the edges where two polygons are glued together could become flat and lie in the interior of faces of the resulting polyhedron,
rather than becoming polyhedron edges. (For an example of this phenomenon, see the illustration of four hexagons glued to form an octahedron.) Therefore, even when the development is described in this way, it may not be clear what shape the resulting polyhedron has, what shapes its faces have, or even how many faces it has. Alexandrov's original proof does not lead to an algorithm for constructing the polyhedron (for instance by giving coordinates for its vertices) realizing the given metric space. In 2008, Bobenko and Izmestiev provided such an algorithm. Their algorithm can approximate the coordinates arbitrarily accurately, in pseudo-polynomial time.
Related results
One of the first existence and uniqueness theorems for convex polyhedra is Cauchy's theorem, which states that a convex polyhedron is uniquely determined by the shape and connectivity of its faces. Alexandrov's theorem strengthens this, showing that even if the faces are allowed to bend or fold, without stretching or shrinking, then their connectivity still determines the shape of the polyhedron. In turn, Alexandrov's proof of the existence part of his theorem uses a strengthening of Cauchy's theorem by Max Dehn to infinitesimal rigidity.
An analogous result to Alexandrov's holds for smooth convex surfaces: a two-dimensional Riemannian manifold whose Gaussian curvature is everywhere positive and totals 4π can be represented uniquely as the surface of a smooth convex body in three dimensions. The uniqueness of this representation is a result of Stephan Cohn-Vossen from 1927, with some regularity conditions on the surface that were removed in later research. Its existence was proven by Alexandrov, using an argument involving limits of polyhedral metrics. Aleksei Pogorelov generalized both these results, characterizing the developments of arbitrary convex bodies in three dimensions.
Another result of Pogorelov on the geodesic metric spaces derived from convex polyhedra is a version of the theorem of the three geodesics: every convex polyhedron has at least three simple closed quasigeodesics. These are curves that are locally straight lines except when they pass through a vertex, where they are required to have angles of less than π on both sides of them.
The developments of ideal hyperbolic polyhedra can be characterized in a similar way to Euclidean convex polyhedra: every two-dimensional manifold with uniform hyperbolic geometry and finite area, combinatorially equivalent to a finitely-punctured sphere, can be realized as the surface of an ideal polyhedron.
References
Geodesic (mathematics)
Mathematics of rigidity
Theorems in convex geometry
Theorems in discrete geometry
Uniqueness theorems | Alexandrov's uniqueness theorem | [
"Physics",
"Mathematics"
] | 1,672 | [
"Mathematical theorems",
"Mathematics of rigidity",
"Theorems in convex geometry",
"Theorems in discrete mathematics",
"Mechanics",
"Theorems in geometry",
"Theorems in discrete geometry",
"Mathematical problems",
"Uniqueness theorems"
] |
29,107,838 | https://en.wikipedia.org/wiki/Mortality%20forecasting | Mortality forecasting refers to the art and science of determining likely future mortality rates. It is especially important in rich countries with a high proportion of aged people, since populations with lower mortality accumulate more pensions.
See also
Lee-Carter model
Life expectancy
Actuarial science
References
Actuarial science
Death
Forecasting | Mortality forecasting | [
"Mathematics"
] | 64 | [
"Applied mathematics",
"Actuarial science"
] |
29,109,196 | https://en.wikipedia.org/wiki/Ortman%20key | An Ortman key is a coupling device used to secure two adjacent cylindrical segments of a pressure vessel common in tactical rocket motors. An Ortman key is made of elongated rectangular metal bar stock, such as steel, and is inserted into juxtaposed annular grooves around the circumference of the mating parts. The Ortman key assembly is used in high-pressure applications where packaging, strength and mass are important.
The Edmund key is a common variant of the Ortman key; it is similar except that a feature is added to the end of the key to aid in its extraction from the assembly.
References
Google Patent Search Ortman Key G. Nathan 1961
Google Patent Search Rocket Motor 1961
Joinery
Mechanical engineering
Structural engineering | Ortman key | [
"Physics",
"Engineering"
] | 147 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Construction",
"Civil engineering",
"Mechanical engineering"
] |
21,546,940 | https://en.wikipedia.org/wiki/Waste%20heat%20recovery%20unit | A waste heat recovery unit (WHRU) is an energy recovery heat exchanger that transfers heat from process outputs at high temperature to another part of the process for some purpose, usually increased efficiency. The WHRU is a tool involved in cogeneration. Waste heat may be extracted from sources such as hot flue gases from a diesel generator, steam from cooling towers, or even waste water from cooling processes such as in steel cooling.
Heat recovery units
Waste heat found in the exhaust gas of various processes, or even in the exhaust stream of a conditioning unit, can be used to preheat the incoming gas. This is one of the basic methods of waste heat recovery. Many steelmaking plants use this process as an economical way to increase production while lowering fuel demand. There are many different commercial recovery units for transferring energy from a hotter medium to a cooler one:
Recuperators: This name is given to different types of heat exchanger through which the exhaust gases are passed, consisting of metal tubes that carry the inlet gas and thus preheat it before it enters the process. The heat wheel is an example which operates on the same principle as a solar air conditioning unit.
Regenerators: This is an industrial unit that reuses the same stream after processing. In this type of heat recovery, the heat is regenerated and reused in the process.
Heat pipe exchanger: Heat pipes are among the best thermal conductors, able to transfer heat a hundred times more effectively than copper. In renewable energy technology they are mainly known for their use in evacuated tube collectors. Heat pipes are mainly used in space, process or air heating applications, in which waste heat from a process is transferred to the surroundings via their heat-transfer mechanism.
Thermal wheel or rotary heat exchanger: consists of a circular honeycomb matrix of heat absorbing material, which is slowly rotated within the supply and exhaust air streams of an air handling system.
Economizer: In case of process boilers, waste heat in the exhaust gas is passed along a recuperator that carries the inlet fluid for the boiler and thus decreases thermal energy intake of the inlet fluid.
Heat pumps: Using an organic fluid that boils at a low temperature means that energy could be regenerated from waste fluids.
Run around coil: comprises two or more multi-row finned tube coils connected to each other by a pumped pipework circuit.
Particulate filters (DPF) to capture emission by maintaining higher temperatures adjacent to the converter and tail pipes to reduce the amount of emissions from the exhaust.
A waste heat recovery boiler (WHRB) is different from a heat recovery steam generator (HRSG) in the sense that the heated medium does not change phase.
Heat to power units
According to a report prepared by Energetics Incorporated for the DOE in November 2004, titled Technology Roadmap, and several others prepared by the European Commission, the majority of the energy produced from conventional and renewable resources is lost to the atmosphere through onsite losses (equipment inefficiency and waste heat) and offsite losses (cables and transformers), which sum to around a 66% loss in electricity value. Waste heat of different grades can be found in the final products of a process or as an industrial by-product, such as the slag in steelmaking plants. Units or devices that recover this waste heat and transform it into electricity are called WHRUs or heat-to-power units:
An organic Rankine cycle (ORC) unit uses an organic fluid as the working fluid. The fluid has a lower boiling point than water to allow it to boil at low temperature, to form a superheated gas that can drive the blade of a turbine and thus a generator.
Thermoelectric (Seebeck, Peltier, Thomson effects) units may also be called WHRU, since they use the heat differential between two plates to produce direct current (DC) power.
Shape-memory alloys can also be used to recover low temperature waste heat and convert it to mechanical action or electricity.
Applications
Traditionally, waste heat of low temperature range (0-120 °C, or typically under 100 °C) has not been used for electricity generation despite efforts by ORC companies, mainly because the Carnot efficiency is rather low (max. 18% for 90 °C heating and 20 °C cooling, minus losses, typically ending up with 5-7% net electricity).
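For reference, the Carnot limit mentioned above follows from η = 1 − T_cold/T_hot with both temperatures expressed in kelvin. The snippet below (Python; the 90 °C and 20 °C figures are taken from the example in the text) evaluates it.

```python
# Carnot efficiency: eta = 1 - T_cold / T_hot, with absolute temperatures.
def carnot_efficiency(t_hot_c, t_cold_c):
    t_hot = t_hot_c + 273.15   # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

eta = carnot_efficiency(90.0, 20.0)
print(f"Carnot limit for 90 C source / 20 C sink: {eta:.1%}")  # roughly 19%
```

This theoretical bound is an upper limit only; as noted above, real low-temperature cycles deliver far less net electricity once component losses are included.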
Waste heat of medium (100-650 °C) and high (>650 °C) temperature could be used for the generation of electricity or mechanical work via different capturing processes.
A waste heat recovery system can also be used to fulfill the refrigeration requirements of, for example, a trailer. The configuration is simple, as only a waste heat recovery boiler and an absorption cooler are required. Furthermore, only low pressures and temperatures need to be handled.
Advantages
The recovery process will add to the efficiency of the process and thus decrease the costs of fuel and energy consumption needed for that process.
Indirect benefits
Reduced pollution: Thermal pollution and air pollution will dramatically decrease since less flue gases of high temperature are emitted from the plant since most of the energy is recycled.
Reduced equipment sizes: As fuel consumption reduces, the control and security equipment for handling the fuel decreases. Also, filtering equipment for the gas is no longer needed in large sizes.
Reduced auxiliary energy consumption: Reduced equipment sizes mean a further reduction in the energy fed to auxiliary systems such as pumps, filters and fans.
Disadvantages
Capital cost to implement a waste heat recovery system may outweigh the benefit gained in heat recovered. It is necessary to put a cost to the heat being offset.
Often waste heat is of low quality (temperature). It can be difficult to efficiently utilize the quantity of low quality heat contained in a waste heat medium.
Heat exchangers tend to be larger to recover significant quantities which increases capital cost.
Maintenance of equipment: Additional equipment requires additional maintenance cost.
Units add size and mass to the overall power unit, which is a particular consideration for the mobile power units of vehicles.
Examples
The Cyclone Waste Heat Engine is designed to generate electricity from recovered waste heat energy using a steam cycle.
International Wastewater Heat Exchange Systems is another company addressing waste heat recovery systems. Focused on multi-unit residential, publicly shared buildings, industrial applications and district energy systems, their systems use the energy in waste water for domestic hot water production, building space heating and cooling.
Motorsport series Formula One introduced waste heat recovery units in 2014 under the name MGU-H.
See also
Cogeneration or combined heat and power (CHP)
Heat recovery steam generator and organic Rankine cycle
Electric turbo compound
Exhaust heat recovery system
Thermal oxidizer
Pinch analysis
Waste-to-energy plant
References
Heat exchangers
Energy recovery
Renewable energy
Cooling technology
"Chemistry",
"Engineering"
] | 1,381 | [
"Chemical equipment",
"Heat exchangers"
] |
21,547,722 | https://en.wikipedia.org/wiki/Hinsberg%20oxindole%20synthesis | The Hinsberg oxindole synthesis is a method of preparing oxindoles from the bisulfite additions of glyoxal. It is named after its inventor Oscar Hinsberg.
See also
Friedel-Crafts alkylation
Stolle synthesis
Hinsberg reaction
References
Name reactions | Hinsberg oxindole synthesis | [
"Chemistry"
] | 61 | [
"Name reactions"
] |
21,552,091 | https://en.wikipedia.org/wiki/Ageliferin | Ageliferin is a chemical compound produced by some sponges. It was first isolated from Caribbean and then Okinawan marine sponges in the genus Agelas. It often co-exists with the related compound sceptrin and other similar compounds. It has antibacterial properties and can cause biofilms to dissolve.
See also
Agelas clathrodes
Agelas conifera
References
Halogen-containing alkaloids
Organobromides
Pyrroles
Benzimidazoles
Carboxamides | Ageliferin | [
"Chemistry"
] | 105 | [
"Halogen-containing alkaloids",
"Alkaloids by chemical classification"
] |
21,554,583 | https://en.wikipedia.org/wiki/Green%20body | A green body is an object whose main constituent is weakly bound clay material, usually in the form of bonded powder or plates before it has been sintered or fired.
In ceramic engineering, the most common method for producing ceramic components is to form a green body comprising a mixture of the ceramic material and various organic or inorganic additives, and then to fire it in a kiln to produce a strong, vitrified object. Additives can serve as solvents, dispersants (deflocculants), binders, plasticizers, lubricants, or wetting agents.
This method is used because of difficulties with the casting of ceramics — due to their extremely high melting temperature and viscosity (relative to other materials such as metals and polymers).
See also
Compaction of ceramic powders
Ceramic art
Pottery
Solid state chemistry
References
Ceramic materials | Green body | [
"Engineering"
] | 177 | [
"Ceramic engineering",
"Ceramic materials"
] |
33,164,829 | https://en.wikipedia.org/wiki/Golden%20LEAF%20Biomanufacturing%20Training%20and%20Education%20Center | The Golden LEAF Biomanufacturing Training and Education Center (BTEC) is a multidisciplinary instructional center at North Carolina State University that provides education and training to develop skilled professionals for the biomanufacturing industry. Biomanufacturing refers to the use of living organisms or other biological material to produce commercially viable products. Examples include therapeutic proteins, monoclonal antibodies, and vaccines for medical use; amino acids and enzymes for food manufacturing; and biofuels and biochemicals for industrial applications. BTEC provides hands-on education and training in bioprocessing concepts and biomanufacturing methods that comply with cGMP (current Good Manufacturing Practice), a set regulations published by the United States Food and Drug Administration (FDA).
BTEC reports administratively through the university's College of Engineering and is guided by an advisory board made up of representatives from the biomanufacturing industry and other organizations interested in biotechnology and biomanufacturing.
In 2003, North Carolina's Golden LEAF Foundation provided almost $39 million to build BTEC, as part of a larger grant to establish a statewide public-private partnership now called NCBioImpact. The state of North Carolina provided funds for process equipment and supports the operation of the facility. The NCBioImpact partnership now includes BTEC, BRITE (Biomanufacturing Research Institute and Technology Enterprise) at North Carolina Central University, North Carolina BioNetwork of the North Carolina Community College System, NCBIO (North Carolina Biosciences Organization), the North Carolina Biotechnology Center, and the Golden LEAF Foundation. It was created to provide workforce training and development for the biotechnology industry, thereby fostering the growth of this economic sector in the state. According to the North Carolina Biotechnology Center, North Carolina is home to 528 biotechnology companies that provide 57,000 jobs and $1.92 billion in taxes for state and local government. Employment in the industry has grown 4.1% from 2008 to 2010, when other industries shed thousands of jobs. In recent years some of the world's largest pharmaceutical companies, e.g. Novartis and Merck & Co, have located and/or expanded manufacturing operations in North Carolina.
Facilities and equipment
BTEC opened in fall 2007 and was the first facility dedicated to biomanufacturing training. BTEC is 82,500 gross square feet and contains 63,000-gross square feet of laboratories, which range from small or bench scale to large-scale suites that simulate a biomanufacturing pilot plant capable of producing biopharmaceutical products. Upstream processes utilize bacteria, yeast, animal cells, and insect cells. Equipment in these spaces includes the following:
Bioreactors, glass and stainless steel (2L - 300L) and disposable (10L - 250L)
Automation systems with distributed control architecture
Downstream recovery and purification equipment, and
Analytical instrumentation
The main BTEC facility is home to the North Carolina Community College System's BioNetwork Capstone Center, which operates an aseptic processing/filling suite and several bench-scale labs. In 2012, BTEC completed construction of additional laboratories in a nearby facility for cell culture, purification, and processing of active virus.
University programs
BTEC delivers undergraduate and graduate courses to North Carolina State University students. Academic programs include the following:
undergraduate certificate
undergraduate minor
post-baccalaureate certificate
graduate minor
a master's program offering two Professional Science Master's degrees, a Master of Science in Biomanufacturing (MS) and a Master of Biomanufacturing (MR)
Curriculum was created with extensive input from industry professionals, and most courses include substantial hands-on laboratory work. Most BTEC courses are offered in a half-semester (eight-week) format, which enables students to complete a series of courses in one academic year.
Job training
BTEC collaborates with industry partners to design, develop and deliver courses that provide professionals working for biomanufacturing companies, equipment vendors, or regulatory agencies with continuing education opportunities. Open-enrollment courses are offered throughout the year and are available to all interested parties. BTEC also regularly delivers courses customized to meet a client's specific needs for training.
BTEC also provides biomanufacturing training for government agencies as specified in contracts and grants. In 2007, the FDA awarded BTEC a 5-year contract to develop and deliver biomanufacturing training for FDA inspectors. In 2010, BTEC received a grant for almost $900,000 from the Biomedical Advanced Research and Development Authority (BARDA), part of the United States Department of Health and Human Services. With funding from this grant, a team of instructors from BTEC, Duke University, and industry provides a three-week course on influenza vaccine manufacturing. Trainees were selected by institutions participating in a U.S. government-sponsored program to build vaccine production capacity among countries with developing economies. Countries represented included Egypt, India, Indonesia, Mexico, Romania, Serbia, Russia, South Korea, Thailand, and Vietnam.
Bioprocess and analytical services
When laboratories are not being used for training, BTEC uses them to perform a variety of services for scientists from industry, government, and academia. Projects involve technology development, process improvement/scale-up, analytical testing, and preparation of material for preclinical studies. These services allow scientists to advance their research projects toward commercialization. In turn, these advancements stimulate the North Carolina economy.
Location
BTEC is located on the Centennial Campus of North Carolina State University in Raleigh, North Carolina, United States. The campus is approximately 15 miles east of Research Triangle Park and Raleigh-Durham International Airport.
Advisory board
References
External links
Golden LEAF Biomanufacturing Training and Education Center (BTEC)
Biomanufacturing Research Institute and Technology Enterprise (BRITE)
Centennial Campus
College of Engineering at North Carolina State University
Golden LEAF Foundation
Merck
NCBioImpact
North Carolina BioNetwork
North Carolina Biosciences Organization (NCBIO)
North Carolina Biotechnology Center
North Carolina State University
Novartis
North Carolina State University
Biotechnology organizations
Biopharmaceuticals | Golden LEAF Biomanufacturing Training and Education Center | [
"Chemistry",
"Engineering",
"Biology"
] | 1,253 | [
"Pharmacology",
"Biotechnology organizations",
"Biotechnology products",
"Biopharmaceuticals"
] |
33,170,008 | https://en.wikipedia.org/wiki/HSC%20Sim | HSC Sim is a process simulator based on the HSC Chemistry software and databases. It has been implemented as a module to HSC Chemistry 7.0 in 2007 and can be used primarily for static process simulation. HSC stands for H ([enthalpy]), S ([entropy]) and Cp([heat capacity]).
Applications
HSC Sim has been developed primarily for use in the mining and mineral industry, though other uses, such as modelling biochemical and organic chemistry processes, are possible as well.
In mineral industry the simulator is used for process operator training as an OTS (operator training simulator).
References
Outotec
HSC Chemistry webpage
Training Simulator for Flotation Process Operators
Simulation software
Chemical engineering software | HSC Sim | [
"Chemistry",
"Engineering"
] | 152 | [
"Chemical engineering software",
"Chemical engineering"
] |
33,170,217 | https://en.wikipedia.org/wiki/Ada%20regulon | In DNA repair, the Ada regulon is a set of genes whose expression is essential to adaptive response (also known as "Ada response", hence the name), which is triggered in prokaryotic cells by exposure to sub-lethal doses of alkylating agents. This allows the cells to tolerate the effects of such agents, which are otherwise toxic and mutagenic.
The Ada response includes the expression of four genes: ada, alkA, alkB, and aidB. The product of ada gene, the Ada protein, is an activator of transcription of all four genes. DNA bases damaged by alkylation are removed by distinct strategies.
Alkylating agents
The alkylating agents form a group of mutagens and carcinogens that modify DNA by alkylation. Alkyl base lesions can arrest replication, interrupt transcription, or signal the activation of cell cycle checkpoints or apoptosis. In mammals, they could be involved in carcinogenesis, neurodegenerative disease and aging.
The alkylating agents can introduce methyl or ethyl groups at all of the available nitrogen and oxygen atoms in DNA bases, providing a number of lesions.
The majority of evidence indicates that, among the 11 identified base modifications, two, 3-methyladenine (3meA) and O6-methylguanine (O6-meG), are mainly responsible for the biological effects of alkylating agents.
Roles of ada-regulated genes
The Ada protein is composed of two major domains, a C-terminal domain and an N-terminal one, linked by a hinge region susceptible to proteolytic cleavage. These domains can function independently. AdaCTD transfers methyl adducts from O6-meG and O4-meG onto its Cys-321 residue, whereas AdaNTD demethylates methyl-phosphotriesters by methyl transfer onto its Cys-38 residue.
The alkA gene encodes a glycosylase that repairs a variety of lesions including N-7-Methylguanine and N-3-Methylpurines and O2-methyl pyrimidines. The AlkA protein removes a damaged base from the sugar-phosphate backbone by cleaving the glycosylic bond attaching the base to the sugar, producing an abasic site. Further processing of the abasic site by AP endonucleases, polymerase I, and ligase then completes the repair.
AlkB, one of the Escherichia coli adaptive response proteins, uses an α ketoglutarate/Fe(II)-dependent mechanism that, by chemical oxidation, removes a variety of alkyl lesions from DNA, thus affording protection of the genome against alkylation.
The AidB protein is thought to take part in the degradation of endogenous alkylating agents. It shows some homology to acyl-CoA oxidases and those containing flavins. Recent observations suggest that AidB may bind to double-stranded DNA and take part in its dealkylation. However, further investigations are necessary to determine the precise function of AidB.
Regulation of transcription
The Ada response includes the expression of four genes: ada, alkA, alkB, and aidB. The product of the ada gene, the Ada protein is an activator of transcription of all four genes.
Ada has two active methyl acceptor cysteine residues that are required for demethylation of DNA. Both sites can become methylated when Ada protein transfers the methyl group from the appropriate DNA lesions to itself. This reaction is irreversible and methylated Ada (me-Ada) can act as a transcriptional activator.
The Ada protein activates the transcription of the Ada regulon in two different ways. In the case of the ada-alkB operon and the aidB promoter, the N-terminal domain (AdaNTD) is involved in DNA binding and interacts with the α subunit of RNA polymerase, whereas the methylated C-terminal domain (me-AdaCTD) interacts with the σ70 subunit of RNA polymerase. Although these interactions are independent, both are necessary for transcription activation.
For activation of the alkA gene, the AdaNTD interacts with both the α and σ subunits of RNA polymerase and activates transcription. In contrast to the ada and aidB promoters, the unmethylated form of the Ada protein, as well as the methylated form of the AdaNTD, is able to activate transcription at alkA.
Methylated Ada is able to activate transcription by σS as well as σ70 at both the ada and aidB promoters. In contrast, not only does me-Ada fail to stimulate alkA transcription by σS, but it negatively affects σS dependent transcription.
Intracellular concentrations of σS increase when the cells reach stationary phase; this in turn results in a me-Ada mediated decrease in the expression of AlkA. Therefore, an increase in expression of the adaptive response genes, in parallel with the expression of genes producing endogenous alkylators during the stationary phase, prevents alkylation damage to DNA and mutagenesis.
Homologues of the Ada regulon in humans
In human cells, the alkyltransferase activity is the product of the MGMT gene. The 21.7 kDa MGMT protein is built of amino-acid sequences very similar to those of E. coli alkyltransferases, like Ada. In contrast to the bacterial enzymes it mainly repairs O6meG, whereas removal of the alkyl adduct from O4meT is much slower and significantly less effective. The preferential repair of O6meG is profitable for eukaryotic cells since in experimental animals treated with alkylating carcinogens this lesion is involved in tumor stimulation.
Unlike the Ada and the human MGMT methyltransferases, AlkB and its human homologs hABH2 and hABH3 not only reverse alkylation base damage directly, but they do so catalytically and with a substrate specificity aimed at the base-pairing interface of the G:C and A:T base pairs. Crystal structures of AlkB and its human homologue hABH3 have shown similar overall folds, highlighting conserved functional domains.
References
DNA repair | Ada regulon | [
"Biology"
] | 1,314 | [
"Molecular genetics",
"DNA repair",
"Cellular processes"
] |
33,171,294 | https://en.wikipedia.org/wiki/Freddy%20Boey | Freddy Boey () is a Singaporean academic currently serving as the president of the City University of Hong Kong. Boey was previously the deputy president (innovation & enterprise) of the National University of Singapore (NUS), overseeing the university's initiatives and activities in the areas of innovation, entrepreneurship and research translation, as well as graduate studies. He was previously the senior vice president (graduate education and research translation) of NUS. Before joining NUS in 2018, Boey was deputy president and provost of Nanyang Technological University (NTU) from July 2011 to September 2017. Prior to these appointments, he was the chair of NTU's School of Materials Science and Engineering from 2005 to 2010.
Education
A professor of materials engineering, Boey graduated from Monash University, Australia, with a First Class Honours degree in materials engineering in 1980. He obtained his PhD in Chemistry and Engineering in 1987 from the National University of Singapore.
Research and innovation
Boey's research areas are in functional biomaterials for medical devices, nanomaterials and nanostructures for cell regeneration, sensing and energy storage. A keen inventor, he has filed 25 original patents with NTU, the majority of which have been licensed. He founded several companies to patent and license his creations, such as a surgical tissue retractor which was licensed to Insightra Medical Inc., Irvine, California, and sold in the United States, India, Japan and Europe.
Boey's other inventions include a fully biodegradable peripheral cardiovascular stent, micropumps for thermal management solutions in consumer electronic gadgets, microfluidic and biomedical devices, a coronary stent with drug release capability, a hernia mesh and cardiac peptides for treating heart diseases. His innovations have garnered him numerous investment funding and research grants.
Among the companies Boey founded are Amaranth Medical Inc, Adcomp Technology Pte Ltd and Electroactiv Ltd. Boey also teamed up with the renowned Mayo Clinic to develop implants for the controlled release of cardiac peptides specially designed to treat heart diseases, through a joint start-up, CardioRev Pte Ltd. This company won early-stage funding under SPRING Singapore’s Technology Enterprise Commercialisation Scheme.
Boey has published 344 top journal papers with a citation of 7436 and h-index of 44. He won about S$36 million in competitive research grants between 2008 and 2011, including a S$10 million individual National Research Foundation (NRF) Competitive Research Programme grant for his work on fully biodegradable cardiovascular implants, a S$20 million NRF Technion-Singapore grant for his research in nanomedicine for cardiovascular diseases and a S$1.25 million grant from the NRF Translational Flagship Project for his research to make cataract surgery safer.
Teaching
During his tenure as chair of NTU's School of Materials Science and Engineering, Boey oversaw its transformation into one of the leading schools in the field. As an educator and mentor, Boey has supervised 33 PhD students and mentored 15 post-doctoral students. His current biomedical research team comprises 12 PhD students and more than 10 post-doctoral students and senior research fellows. About 15 of his past and current students and staff have been or are now involved in their own or his start-up companies.
As deputy president and provost, Boey has worked to enhance learning conditions for students. Among these initiatives are a S$45 million learning hub with modern study and social facilities, as well as upgraded learning spaces and resources across campus. He has also implemented new measures to ensure that teaching standards remain high, with potential faculty appointments having their teaching abilities reviewed on different levels.
Awards and appointments
Boey won the President's Science and Technology Medal in 2013, the highest scientific award in Singapore. He received the gold medal from the President Tony Tan Keng Yam during September 2013.
He was awarded Singapore's Public Administration Medal (Gold) by the Government of Singapore in 2016. He was also awarded Singapore's Public Administration Medal (Silver) by the Government of Singapore in 2010. He is a director on the boards of the Intellectual Property Office of Singapore, DSO National Laboratories and Temasek Laboratories@NTU, and is a founding member of the newly set up Singapore Academy of Engineers.
He is also a member of the SPRING Singapore Technology Policy Advisory Committee and has been on the President's Science and President's Technology Award committee since 2007.
Boey is on the panel of several national funding and award panels and has chaired the National A*STAR Grants Review Committee for the past few years. He is also Honorary Professor at both the University of Indonesia and the Nanjing University of Posts and Telecommunications.
An active member of the materials engineering community, Boey is a Fellow of the Institute of Materials, Minerals and Mining (United Kingdom) and Fellow of the Institute of Engineers Singapore, and was until recently the Deputy President of the Materials Research Society, Singapore.
Boey was an ad hoc member of NTU's inaugural University Academic Advisory Committee. He was also an appointed member of both the University Blue Ribbon Commission and the Blue Ribbon Implementation Commission.
In November 2011, Boey received the Distinguished Alumni of the Year Award from Monash University at the 50th anniversary celebrations of its Faculty of Engineering. The award recognises his achievements as a teacher, researcher and innovator, including his exceptional contributions to nanomedicine, as well as his active community engagement.
In December 2011, he received the Degree of Doctor of Technology from Loughborough University for his outstanding achievements as an engineer and academic leader.
In February 2013, he was awarded the prestigious Faculty of Medicine Fellowship by Imperial College London in recognition of his achievements in the field of biomedical engineering and his outstanding contributions to the development of the Lee Kong Chian School of Medicine.
References
External links
NTU website
NTU President’s Office
NTU School of Materials Science and Engineering
Year of birth missing (living people)
Living people
National University of Singapore alumni
Materials scientists and engineers
Recipients of the Pingat Pentadbiran Awam
Fellows of the Institute of Materials, Minerals and Mining | Freddy Boey | [
"Materials_science",
"Engineering"
] | 1,246 | [
"Materials scientists and engineers",
"Materials science"
] |
33,172,646 | https://en.wikipedia.org/wiki/Manuel%20Cardona | Manuel Cardona Castro (7 September 1934 – 2 July 2014) was a condensed matter physicist. According to the ISI Citations web database, Cardona was one of the eight most cited physicists since 1970. He specialized in solid state physics. Cardona's main interests were in the fields of: Raman scattering (and other optical spectroscopies) as applied to semiconductor microstructures, materials with tailor-made isotopic compositions, and high Tc superconductors, particularly investigations of electronic and vibronic excitations in the normal and superconducting state.
Academic career
Cardona was born in Barcelona, Spain in 1934. After obtaining a master's degree in physics from the University of Barcelona in 1955, Cardona was awarded a fellowship to work as a graduate student at Harvard University starting in 1956. At Harvard he began investigations of the dielectric properties of semiconductors, in particular germanium and silicon. With this work as a thesis he received a PhD in Applied Physics at Harvard. From 1959 till 1961 he continued similar work on III-V semiconductors at the RCA Laboratories in Zurich, Switzerland. In 1961 he moved to the RCA Labs in Princeton, NJ, where he continued work on the optical properties of semiconductors and started investigations of the microwave properties of superconductors. In 1964 he became a member of the Physics Faculty of Brown University (Providence, RI). In June–September 1965 he taught at the University of Buenos Aires under the auspices of the Ford Foundation. In 1971 he moved to Stuttgart, Germany as a founding director of the then-recently created Max Planck Institute for Solid State Research. Concomitantly he became a scientific Member of the Max Planck Society, where he became emeritus in 2000.
From 1992 to 2004, Cardona served as chief editor of Solid State Communications.
Distinctions and honors
Besides receiving over at least 61 awards during his career, Cardona held eleven honorary doctorates. Some notable honors include:
1964 American Physical Society, Fellow
1982 Narcís Monturiol Medal, Government of Catalonia
1984 Frank Isakson Prize, American Physical Society
1984 Fellow, Japanese Society for the Promotion of Science
1984 Corresponding Member, Royal Academy of Sciences of Barcelona
1987 Member, National Academy of Sciences of the USA
1987 Grand Cross of Alfonso X el Sabio, Spain
1988 Prince of Asturias Award for Technical and Scientific Research, named after the Crown Prince of Spain
1991 Member, Academia Europaea
1994 Max Planck Research Prize, shared with E. E. Haller, Berkeley
1995 Corresponding Member, Spanish Royal Academy of Sciences
1997 John Wheatley Award, American Physical Society
1999 Ernst Mach Medal, Prague
2001 Nevill Mott Medal and Prize
2003 Matteucci Medal by the Accademia nazionale delle scienze, Italy
2009 Fellow, Royal Society of Canada
2011 Vernadsky Gold Medal of the National Academy of Sciences of Ukraine
2012 Paul Klemens Award, Phonons Conference, Ann Arbor, MI.
2012 Luis Federico Leloir Prize, Argentina
Publications
Cardona has authored over 1,300 scientific publications in international journals, ten monographs on solid state physics and co-authored a textbook on semiconductors. Since 1972, Cardona has served on the Board of Editors of at least seven journals, including being the Editor-in-Chief of Solid State Communications from 1992 to 2005.
Some of his works include:
Manuel Cardona: Modulation Spectroscopy, Academic Press 1969. Lib of Congress 55-12299
Manuel Cardona, Gernot Günterodt and Roberto Merlin: Light Scattering in Solids I-IX (nine volumes) Springer Verlag;
Pere Bonnin: Manuel Cardona i Castro, Fundació Catalana per a la Recerca, Barcelona 1998
Peter Y. Yu and Manuel Cardona, Fundamentals of semiconductors, 4 editions 1996-2000,
Personal life
He died in Stuttgart in 2014, where he lived since 1971 with his wife Inge Cardona (née Hecht). He held American, German and Spanish citizenship and had 3 children and 7 grandchildren.
References
External links
Marvin L. Cohen, Francisco de la Cruz, Lothar Ley, Miles V. Klein, Michael Thewalt, and Peter Y. Yu, "Biographical Memoirs of the National Academy of Sciences (2016)
1934 births
2014 deaths
Spanish physicists
Scientists from Barcelona
University of Barcelona alumni
Scientists from Catalonia
Foreign associates of the National Academy of Sciences
Members of the Lincean Academy
Harvard University alumni
Brown University faculty
Fellows of the American Physical Society
Fellows of the Royal Society of Canada
Spectroscopists
Academic journal editors
Max Planck Institute directors | Manuel Cardona | [
"Physics",
"Chemistry"
] | 918 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
5,256,483 | https://en.wikipedia.org/wiki/Electromagnetic%20field%20solver | Electromagnetic field solvers (or sometimes just field solvers) are specialized programs that solve (a subset of) Maxwell's equations directly. They form a part of the field of electronic design automation, or EDA, and are commonly used in the design of integrated circuits and printed circuit boards. They are used when a solution from first principles or the highest accuracy is required.
Introduction
The extraction of parasitic circuit models is essential for various aspects of physical verification such as timing, signal integrity, substrate coupling, and power grid analysis. As circuit speeds and densities have increased, the need has grown to account accurately for parasitic effects for more extensive and more complicated interconnect structures. In addition, the electromagnetic complexity has grown as well, from resistance and capacitance to inductance, and now even full electromagnetic wave propagation. This increase in complexity has also grown for the analysis of passive devices such as integrated inductors. Electromagnetic behavior is governed by Maxwell's equations, and all parasitic extraction requires solving some form of Maxwell's equations. That form may be a simple analytic parallel plate capacitance equation or may involve a full numerical solution for a complex 3D geometry with wave propagation. In layout extraction, analytic formulas for simple or simplified geometry can be used where accuracy is less important than speed. Still, when the geometric configuration is not simple, and accuracy demands do not allow simplification, a numerical solution of the appropriate form of Maxwell's equations must be employed.
The appropriate form of Maxwell's equations is typically solved by one of two classes of methods. The first uses a differential form of the governing equations and requires the discretization (meshing) of the entire domain in which the electromagnetic fields reside. Two of the most common approaches in this first class are the finite difference (FD) and finite element (FEM) methods. The resultant linear algebraic system (matrix) that must be solved is large but sparse (contains very few non-zero entries). Sparse linear solution methods, such as sparse factorization, conjugate-gradient, or multigrid methods can be used to solve these systems, the best of which require CPU time and memory of O(N) time, where N is the number of elements in the discretization. However, most problems in electronic design automation (EDA) are open problems, also called exterior problems, and since the fields decrease slowly towards infinity, these methods can require extremely large N.
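As a toy illustration of how a differential formulation produces a sparse linear system, the sketch below (Python with NumPy/SciPy; the grid size and plate voltages are arbitrary choices, not from the text) assembles the standard one-dimensional finite-difference Laplacian for the potential between two plates and solves it with a sparse solver.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# 1D finite-difference Laplace problem: d^2 V / dx^2 = 0 between two plates,
# with V(0) = 0 V and V(L) = 1 V.  The FD discretization yields a sparse
# tridiagonal system -- the hallmark of differential (FD/FEM) formulations.
n = 99                                   # number of interior grid points
A = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.zeros(n)
b[-1] = -1.0                             # boundary condition V = 1 V at the right plate

V = spsolve(A, b)                        # sparse solve; cheap for a tridiagonal matrix
print(V[:3], V[-3:])                     # potential rises linearly from 0 toward 1
```

The same idea extends to two- and three-dimensional grids, where the matrix stays sparse (a handful of nonzeros per row) even as N grows, which is why sparse factorization, conjugate-gradient, and multigrid methods are the natural solvers for FD and FEM formulations.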
The second class of methods are integral equation methods, which instead require a discretization of only the electromagnetic field sources. Those sources can be physical quantities, such as the surface charge density for the capacitance problem, or mathematical abstractions resulting from applying Green's theorem. When the sources exist only on two-dimensional surfaces for three-dimensional problems, the method is often called the method of moments (MoM) or boundary element method (BEM). For open problems, the sources of the field exist in a much smaller domain than the fields themselves, so the linear systems generated by integral equation methods are much smaller than those from FD or FEM. Integral equation methods, however, generate dense (all entries are nonzero) linear systems, making such methods preferable to FD or FEM only for small problems. Such systems require O(n²) memory to store and O(n³) operations to solve via direct Gaussian elimination or, at best, O(n²) if solved iteratively. Increasing circuit speeds and densities require the solution of increasingly complicated interconnect, making dense integral equation approaches unsuitable because of these high growth rates of computational cost with increasing problem size.
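As a rough illustration of this storage contrast (a sketch, not tied to any particular solver), the snippet below builds a sparse 5-point finite-difference Laplacian with SciPy and compares its nonzero count with the entry count of a dense matrix of the same dimension, as a boundary-element discretization would produce. The grid size is an arbitrary assumption.

```python
import scipy.sparse as sp

n = 200                                  # grid points per side (assumed)
N = n * n                                # unknowns in a 2-D FD discretization

# 5-point finite-difference Laplacian on an n x n grid -> sparse matrix
lap_1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
lap_2d = sp.kronsum(lap_1d, lap_1d, format="csr")

sparse_entries = lap_2d.nnz              # O(N) nonzeros for the sparse FD matrix
dense_entries = N * N                    # O(N^2) entries for a dense BEM-style matrix

print(f"N = {N}")
print(f"sparse FD matrix nonzeros : {sparse_entries:,}")
print(f"dense matrix entries      : {dense_entries:,}")
```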
In the past two decades, much work has gone into improving both the differential and integral equation approaches, as well as into new approaches based on random walk methods. Methods of truncating the discretization required by the FD and FEM approaches have greatly reduced the number of elements required. Integral equation approaches have become particularly popular for interconnect extraction thanks to sparsification techniques, also sometimes called matrix compression, acceleration, or matrix-free techniques, which have brought nearly O(n) growth in storage and solution time to integral equation methods.
Sparsified integral equation techniques are typically used in the IC industry to solve capacitance and inductance extraction problems. The random-walk methods have become quite mature for capacitance extraction. For problems requiring the solution of the full Maxwell's equations (full-wave), both differential and integral equation approaches are common.
See also
Computational electromagnetics
Electronic design automation
Integrated circuit design
Standard Parasitic Exchange Format
Teledeltos
References
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer. A survey of the field of electronic design automation. This summary was derived (with permission) from Vol. II, Chapter 26, "High Accuracy Parasitic Extraction", by Mattan Kamon and Ralph Iverson.
Electronic design
Electronic design automation
Electronic engineering
Integrated circuits
Computational electromagnetics | Electromagnetic field solver | [
"Physics",
"Technology",
"Engineering"
] | 1,018 | [
"Computational electromagnetics",
"Computer engineering",
"Electronic design",
"Computational physics",
"Electronic engineering",
"Electrical engineering",
"Design",
"Integrated circuits"
] |
5,260,042 | https://en.wikipedia.org/wiki/Flux%20qubit | In quantum computing, more specifically in superconducting quantum computing, flux qubits (also known as persistent current qubits) are micrometer sized loops of superconducting metal that is interrupted by a number of Josephson junctions. These devices function as quantum bits. The flux qubit was first proposed by Terry P. Orlando et al. at MIT in 1999 and fabricated shortly thereafter. During fabrication, the Josephson junction parameters are engineered so that a persistent current will flow continuously when an external magnetic flux is applied. Only an integer number of flux quanta are allowed to penetrate the superconducting ring, resulting in clockwise or counter-clockwise mesoscopic supercurrents (typically 300 nA) in the loop to compensate (screen or enhance) a non-integer external flux bias. When the applied flux through the loop area is close to a half integer number of flux quanta, the two lowest energy eigenstates of the loop will be a quantum superposition of the clockwise and counter-clockwise currents. The two lowest energy eigenstates differ only by the relative quantum phase between the composing current-direction states. Higher energy eigenstates correspond to much larger (macroscopic) persistent currents, that induce an additional flux quantum to the qubit loop, thus are well separated energetically from the lowest two eigenstates. This separation, known as the "qubit non linearity" criteria, allows operations with the two lowest eigenstates only, effectively creating a two level system. Usually, the two lowest eigenstates will serve as the computational basis for the logical qubit.
Computational operations are performed by pulsing the qubit with microwave-frequency radiation whose energy is comparable to the gap between the two basis states, similar to RF-SQUID. A properly selected pulse duration and strength can put the qubit into a quantum superposition of the two basis states, while subsequent pulses can manipulate the probability weighting that the qubit will be measured in either of the two basis states, thus performing a computational operation.
Fabrication
Flux qubits are fabricated using techniques similar to those used for microelectronics. The devices are usually made on silicon or sapphire wafers using electron beam lithography and metallic thin film evaporation processes. To create Josephson junctions, a technique known as shadow evaporation is normally used; this involves evaporating the source metal alternately at two angles through a lithographically defined mask in the electron beam resist. This results in two overlapping layers of the superconducting metal, between which a thin layer of insulator (normally aluminum oxide) is deposited.
Dr. Shcherbakova's group reported using niobium as the contacts for their flux qubits. Niobium is often used as the contact and is deposited by employing a sputtering technique and using optical lithography to pattern the contacts. An argon beam can then be used to reduce the oxide layer that forms on top of the contacts. The sample must be cooled during the etching process in order to keep the niobium contacts from melting. At this point, the aluminum layers can be deposited on top of the clean niobium surfaces. The aluminum is then deposited in two steps from alternating angles on the niobium contacts. An oxide layer forms between the two aluminum layers in order to create the Al/AlOx/Al Josephson junction. In standard flux qubits, 3 or 4 Josephson junctions will be patterned around the loop.
Resonators for reading out the state of the flux qubit can be fabricated with similar techniques. A resonator can be made by e-beam lithography and CF4 reactive ion etching of thin films of niobium or a similar metal, and it can then be coupled to the flux qubit by fabricating the qubit at the end of the resonator.
Flux Qubit Parameters
The flux qubit is distinguished from other known types of superconducting qubit, such as the charge qubit or phase qubit, by the coupling energy and charging energy of its junctions. In the charge qubit regime the charging energy of the junctions dominates the coupling energy. In a flux qubit the situation is reversed and the coupling energy dominates. Typically for a flux qubit the coupling energy is 10–100 times greater than the charging energy, which allows the Cooper pairs to flow continuously around the loop rather than tunnel discretely across the junctions as in a charge qubit.
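A back-of-the-envelope check of this regime, assuming the common definitions E_J = Φ0·I_c/(2π) for the Josephson coupling energy and E_C = e²/(2C) for the charging energy (conventions vary between authors), with illustrative junction parameters:

```python
import math

PHI0 = 2.067833848e-15      # magnetic flux quantum, Wb
E_CHARGE = 1.602176634e-19  # elementary charge, C

I_c = 0.5e-6                # junction critical current, A (assumed)
C_j = 5e-15                 # junction capacitance, F (assumed)

E_J = PHI0 * I_c / (2 * math.pi)   # Josephson coupling energy
E_C = E_CHARGE**2 / (2 * C_j)      # single-electron charging energy

print(f"E_J/E_C = {E_J / E_C:.0f}")  # ~64 with these assumed values: flux-qubit regime
```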
Josephson Junctions
In order for a superconducting circuit to function as a qubit, there needs to be a non-linear element. If the circuit contains only a harmonic oscillator, as in an LC circuit, the energy levels are equally spaced. This prevents the formation of a two-level computational subspace, because any microwave radiation applied to manipulate the ground state and the first excited state would also excite the higher energy states. Josephson junctions are the only electronic elements that are both non-linear and non-dissipative at low temperatures. These are requirements for quantum integrated circuits, making the Josephson junction essential in the construction of flux qubits. Understanding the physics of the Josephson junction improves comprehension of how flux qubits operate.
Essentially, Josephson junctions consist of two pieces of superconducting thin film that are separated by a layer of insulator. In the case of flux qubits, Josephson junctions are fabricated by the process that is described above. The wave functions of the superconducting components overlap, and this construction allows for the tunneling of electrons, which creates a phase difference between the wave functions on either side of the insulating barrier. This phase difference is equivalent to δ = φ1 − φ2, where φ1 and φ2 correspond to the wave functions on either side of the tunneling barrier. For this phase difference, the following Josephson relations have been established:

I = I_c sin δ,    V = (Φ0/2π) dδ/dt
Here, I is the Josephson current, I_c is the critical current of the junction, and Φ0 is the flux quantum. By differentiating the current equation and using substitution, one obtains the Josephson inductance term L_J:

L_J = Φ0 / (2π I_c cos δ)
From these equations, it can be seen that the Josephson inductance is non-linear because of the cosine term in the denominator; as a result, the energy level spacings are no longer equal, restricting the dynamics of the system to the two qubit states. Because of the non-linearity of the Josephson junction, operations using microwaves can be performed on the two lowest energy eigenstates (the two qubit states) without exciting the higher energy states. This was previously referred to as the "qubit non-linearity" criterion. Thus, Josephson junctions are an integral element of flux qubits and of superconducting circuits in general.
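The non-linearity can be made concrete numerically. The sketch below evaluates L_J = Φ0/(2π I_c cos δ) for a few phase values, using an assumed, order-of-magnitude critical current; the inductance grows steeply as δ approaches π/2.

```python
import numpy as np

PHI0 = 2.067833848e-15   # magnetic flux quantum, Wb
I_c = 0.5e-6             # junction critical current, A (assumed)

for frac in (0.0, 0.25, 0.45):
    delta = frac * np.pi
    L_J = PHI0 / (2 * np.pi * I_c * np.cos(delta))   # Josephson inductance
    print(f"delta = {frac:.2f}*pi  ->  L_J = {L_J * 1e9:.2f} nH")
# prints roughly 0.66, 0.93 and 4.21 nH: strongly phase-dependent, i.e. non-linear
```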
Coupling
Coupling between two or more qubits is essential to implement many-qubit gates. The two basic coupling mechanisms are direct inductive coupling and coupling via a microwave resonator. In the direct coupling, the circulating currents of the qubits inductively affect one another: clockwise current in one qubit induces counter-clockwise current in the other. In the Pauli matrix formalism, a σ_z⊗σ_z term appears in the Hamiltonian, essential for the controlled NOT gate implementation. The direct coupling may be further enhanced by kinetic inductance if the qubit loops are made to share an edge, so that the currents flow through the same superconducting line. Inserting a Josephson junction on that shared line adds a Josephson inductance term and increases the coupling even more. To implement a switchable coupling in the direct coupling mechanism, as required to implement a gate of finite duration, an intermediate coupling loop may be used. The control magnetic flux applied to the coupler loop switches the coupling on and off, as implemented, for example, in the D-Wave Systems machines. The second method of coupling uses an intermediate microwave cavity resonator, commonly implemented in a coplanar waveguide geometry. By tuning the energy separation of the qubits to match that of the resonator, the phases of the loop currents are synchronized and a coupling is implemented. Tuning the qubits in and out of resonance (for example, by modifying their bias magnetic flux) controls the duration of the gate operation.
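As a toy illustration of the coupling term (a sketch only, with arbitrary energy units and with the single-qubit tunneling terms omitted for brevity), one can build the diagonal part of a two-qubit Hamiltonian with a σ_z⊗σ_z interaction and inspect how the coupling strength J shifts the four levels:

```python
import numpy as np

sz = np.diag([1.0, -1.0])      # Pauli sigma_z
identity = np.eye(2)

eps1, eps2, J = 1.0, 1.3, 0.1  # qubit energy biases and coupling strength (assumed)

# H = (eps1/2) sz(x)I + (eps2/2) I(x)sz + J sz(x)sz  (tunneling terms omitted)
H = (0.5 * eps1 * np.kron(sz, identity)
     + 0.5 * eps2 * np.kron(identity, sz)
     + J * np.kron(sz, sz))

print(np.linalg.eigvalsh(H))   # four levels; the coupling J shifts them pairwise
```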
Readout
Like all quantum bits, a flux qubit requires a suitably sensitive probe coupled to it in order to measure its state after a computation has been carried out. Such quantum probes should introduce as little back-action as possible onto the qubit during measurement. Ideally they should be decoupled during computation and then turned "on" for a short time during read-out. Read-out probes for flux qubits work by interacting with one of the qubit's macroscopic variables, such as the circulating current, the flux within the loop, or the macroscopic phase of the superconductor. This interaction then changes some variable of the read-out probe which can be measured using conventional low-noise electronics. The read-out probe is typically the technological aspect that separates the research of different university groups working on flux qubits.
Prof. Mooij's group at Delft in the Netherlands, along with collaborators, has pioneered flux qubit technology and was the first to conceive, propose and implement flux qubits as they are known today. The Delft read-out scheme is based on a SQUID loop that is inductively coupled to the qubit; the qubit's state influences the critical current of the SQUID. The critical current can then be read out using ramped measurement currents through the SQUID. Recently the group has used the plasma frequency of the SQUID as the read-out variable.
Dr. Il'ichev's group at IPHT Jena in Germany uses impedance measurement techniques based on the flux qubit influencing the resonant properties of a high-quality tank circuit which, as in the Delft scheme, is inductively coupled to the qubit. In this scheme the qubit's magnetic susceptibility, which is defined by its state, changes the phase angle between the current and voltage when a small AC signal is passed into the tank circuit.
Prof. Petrashov's group at Royal Holloway uses an Andreev interferometer probe to read out flux qubits. This read-out exploits the influence of a superconductor's phase on the conductance properties of a normal metal. A length of normal metal is connected at each end to opposite sides of the qubit using superconducting leads; the phase across the qubit, which is defined by its state, is translated into the normal metal, whose resistance is then read out using low-noise resistance measurements.
Dr. Jerger's group uses resonators that are coupled with the flux qubit. Each resonator is dedicated to just one qubit, and all resonators can be measured with a single transmission line. The state of the flux qubit alters the resonant frequency of the resonator due to a dispersive shift that is picked up by the resonator from the coupling with the flux qubit. The resonant frequency is then measured by the transmission line for each resonator in the circuit. The state of the flux qubit is then determined by the measured shift in the resonant frequency.
References
Quantum information science
Quantum electronics
Superconductivity | Flux qubit | [
"Physics",
"Materials_science",
"Engineering"
] | 2,363 | [
"Physical quantities",
"Quantum electronics",
"Superconductivity",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Nanotechnology",
"Electrical resistance and conductance"
] |
5,260,710 | https://en.wikipedia.org/wiki/Beta%20plane | In geophysical fluid dynamics, an approximation whereby the Coriolis parameter, f, is set to vary linearly in space is called a beta plane approximation.
On a rotating sphere such as the Earth, f varies with the sine of latitude; in the so-called f-plane approximation, this variation is ignored, and a value of f appropriate for a particular latitude is used throughout the domain. This approximation can be visualized as a tangent plane touching the surface of the sphere at this latitude.
A more accurate model is a linear Taylor series approximation to this variability about a given latitude φ0:

f = f0 + βy,

where f0 = 2Ω sin φ0 is the Coriolis parameter at φ0, β = 2Ω cos φ0 / a is the Rossby parameter, y is the meridional distance from φ0, Ω is the angular rotation rate of the Earth, and a is the Earth's radius.
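For a concrete sense of magnitude, the snippet below evaluates f0 and β at a sample reference latitude of 45°N using standard values for Ω and a; the latitude is an arbitrary choice for illustration.

```python
import math

OMEGA = 7.2921159e-5       # Earth's angular rotation rate, rad/s
A_EARTH = 6.371e6          # mean Earth radius, m
phi0 = math.radians(45.0)  # reference latitude (arbitrary choice)

f0 = 2 * OMEGA * math.sin(phi0)               # Coriolis parameter at phi0
beta = 2 * OMEGA * math.cos(phi0) / A_EARTH   # Rossby parameter

print(f"f0   = {f0:.3e} s^-1")         # ~1.03e-4 s^-1
print(f"beta = {beta:.3e} m^-1 s^-1")  # ~1.62e-11 m^-1 s^-1
```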
In analogy with the f-plane, this approximation is termed the beta plane, even though it no longer describes dynamics on a hypothetical tangent plane. The advantage of the beta plane approximation over more accurate formulations is that it does not contribute nonlinear terms to the dynamical equations; such terms make the equations harder to solve. The name 'beta plane' derives from the convention to denote the linear coefficient of variation with the Greek letter β.
The beta plane approximation is useful for the theoretical analysis of many phenomena in geophysical fluid dynamics since it makes the equations much more tractable, yet retains the important information that the Coriolis parameter varies in space. In particular, Rossby waves, the most important type of waves if one considers large-scale atmospheric and oceanic dynamics, depend on the variation of f as a restoring force; they do not occur if the Coriolis parameter is approximated only as a constant.
See also
Rossby parameter
Coriolis effect
Coriolis frequency
Baroclinic instability
Quasi-geostrophic equations
References
Holton, J. R., An Introduction to Dynamical Meteorology, Academic Press, 2004.
Pedlosky, J., Geophysical Fluid Dynamics, Springer-Verlag, 1992.
Fluid dynamics
Atmospheric dynamics
Oceanography | Beta plane | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 419 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Atmospheric dynamics",
"Oceanography",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
945,225 | https://en.wikipedia.org/wiki/Isoperimetric%20dimension | In mathematics, the isoperimetric dimension of a manifold is a notion of dimension that tries to capture how the large-scale behavior of the manifold resembles that of a Euclidean space (unlike the topological dimension or the Hausdorff dimension which compare different local behaviors against those of the Euclidean space).
In the Euclidean space, the isoperimetric inequality says that of all bodies with the same volume, the ball has the smallest surface area. In other manifolds it is usually very difficult to find the precise body minimizing the surface area, and this is not what the isoperimetric dimension is about. The question we will ask is, what is approximately the minimal surface area, whatever the body realizing it might be.
Formal definition
We say that a differentiable manifold M satisfies a d-dimensional isoperimetric inequality if for any open set D in M with a smooth boundary one has

area(∂D) ≥ C vol(D)^((d−1)/d).
The notations vol and area refer to the regular notions of volume and surface area on the manifold, or more precisely, if the manifold has n topological dimensions then vol refers to n-dimensional volume and area refers to (n − 1)-dimensional volume. C here refers to some constant, which does not depend on D (it may depend on the manifold and on d).
The isoperimetric dimension of M is the supremum of all values of d such that M satisfies a d-dimensional isoperimetric inequality.
Examples
A d-dimensional Euclidean space has isoperimetric dimension d. This is the well known isoperimetric problem — as discussed above, for the Euclidean space the constant C is known precisely since the minimum is achieved for the ball.
An infinite cylinder (i.e. a product of the circle and the line) has topological dimension 2 but isoperimetric dimension 1. Indeed, multiplying any manifold with a compact manifold does not change the isoperimetric dimension (it only changes the value of the constant C). Any compact manifold has isoperimetric dimension 0.
It is also possible for the isoperimetric dimension to be larger than the topological dimension. The simplest example is the infinite jungle gym, which has topological dimension 2 and isoperimetric dimension 3. See for pictures and Mathematica code.
The hyperbolic plane has topological dimension 2 and isoperimetric dimension infinity. In fact the hyperbolic plane has positive Cheeger constant. This means that it satisfies the inequality

area(∂D) ≥ C vol(D),
which obviously implies infinite isoperimetric dimension.
Consequences of isoperimetry
A simple integration over r (or sum in the case of graphs) shows that a d-dimensional isoperimetric inequality implies a d-dimensional volume growth, namely

vol(B(x,r)) ≥ C r^d,
where B(x,r) denotes the ball of radius r around the point x in the Riemannian distance or in the graph distance. In general, the opposite is not true, i.e. even uniformly exponential volume growth does not imply any kind of isoperimetric inequality. A simple example can be had by taking the graph Z (i.e. all the integers with edges between n and n + 1) and connecting to the vertex n a complete binary tree of height |n|. Both properties (exponential growth and 0 isoperimetric dimension) are easy to verify.
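As a small numerical illustration of this forward implication in the graph setting, the snippet below counts the points of the grid Z² within graph distance r of the origin; the count grows like r², consistent with the 2-dimensional isoperimetric inequality that Z² satisfies (an illustration of the growth rate, not a proof).

```python
# |B(0, r)| on the grid Z^2 with the graph (l1) metric grows quadratically in r
for r in (5, 10, 20, 40):
    count = sum(1 for x in range(-r, r + 1)
                  for y in range(-r, r + 1)
                  if abs(x) + abs(y) <= r)
    print(f"r = {r:3d}: |B(0, r)| = {count:5d}   (2*r^2 = {2 * r * r})")
```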
An interesting exception is the case of groups. It turns out that a group with polynomial growth of order d has isoperimetric dimension d. This holds both for the case of Lie groups and for the Cayley graph of a finitely generated group.
A theorem of Varopoulos connects the isoperimetric dimension of a graph to the rate of escape of random walk on the graph. The result states
Varopoulos' theorem: If G is a graph satisfying a d-dimensional isoperimetric inequality then

p_n(x,y) ≤ C n^(−d/2),

where p_n(x,y) is the probability that a random walk on G starting from x will be in y after n steps, and C is some constant.
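For the simplest concrete case, the simple random walk on the graph Z (isoperimetric dimension 1), the return probability after 2n steps is exactly C(2n, n)/4^n, which behaves like (πn)^(−1/2), consistent with the n^(−d/2) bound for d = 1. A quick numerical check:

```python
import math

for n in (10, 100, 1000):
    p_return = math.comb(2 * n, n) / 4**n   # exact return probability after 2n steps
    estimate = 1 / math.sqrt(math.pi * n)   # asymptotic (pi*n)^(-1/2)
    print(f"n = {n:4d}: p_2n = {p_return:.5f},  (pi*n)^-0.5 = {estimate:.5f}")
# the two columns agree increasingly well as n grows
```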
References
Isaac Chavel, Isoperimetric Inequalities: Differential Geometric and Analytic Perspectives, Cambridge University Press, Cambridge, UK (2001).
Discusses the topic in the context of manifolds, no mention of graphs.
N. Th. Varopoulos, Isoperimetric inequalities and Markov chains, J. Funct. Anal. 63:2 (1985), 215–239.
Thierry Coulhon and Laurent Saloff-Coste, Isopérimétrie pour les groupes et les variétés, Rev. Mat. Iberoamericana 9:2 (1993), 293–314.
This paper contains the result that on groups of polynomial growth, volume growth and isoperimetric inequalities are equivalent. In French.
Fan Chung, Discrete Isoperimetric Inequalities. Surveys in Differential Geometry IX, International Press, (2004), 53–82. http://math.ucsd.edu/~fan/wp/iso.pdf.
This paper contains a precise definition of the isoperimetric dimension of a graph, and establishes many of its properties.
Mathematical analysis
Dimension | Isoperimetric dimension | [
"Physics",
"Mathematics"
] | 1,047 | [
"Geometric measurement",
"Mathematical analysis",
"Physical quantities",
"Theory of relativity",
"Dimension"
] |