1,286,452
https://en.wikipedia.org/wiki/Subgame
In game theory, a subgame is any part (a subset) of a game that meets the following criteria (the following terms allude to a game described in extensive form): It has a single initial node that is the only member of that node's information set (i.e. the initial node is in a singleton information set). If a node is contained in the subgame then so are all of its successors. If a node in a particular information set is in the subgame then all members of that information set belong to the subgame. It is a notion used in the solution concept of subgame perfect Nash equilibrium, a refinement of the Nash equilibrium that eliminates non-credible threats. The key feature of a subgame is that it, when seen in isolation, constitutes a game in its own right. When the initial node of a subgame is reached in a larger game, players can concentrate only on that subgame; they can ignore the history of the rest of the game (provided they know what subgame they are playing). This is the intuition behind the definition given above of a subgame. It must contain an initial node that is a singleton information set since this is a requirement of a game. Otherwise, it would be unclear where the player with first move should start at the beginning of a game (but see nature's choice). Even if it is clear in the context of the larger game which node of a non-singleton information set has been reached, players could not ignore the history of the larger game once they reached the initial node of a subgame if subgames cut across information sets. Furthermore, a subgame can be treated as a game in its own right, but it must reflect the strategies available to players in the larger game of which it is a subset. This is the reasoning behind 2 and 3 of the definition. All the strategies (or subsets of strategies) available to a player at a node in a game must be available to that player in the subgame the initial node of which is that node. Subgame perfection One of the principal uses of the notion of a subgame is in the solution concept subgame perfection, which stipulates that an equilibrium strategy profile be a Nash equilibrium in every subgame. In a Nash equilibrium, there is some sense in which the outcome is optimal - every player is playing a best response to the other players. However, in some dynamic games this can yield implausible equilibria. Consider a two-player game in which player 1 has a strategy S to which player 2 can play B as a best response. Suppose also that S is a best response to B. Hence, {S,B} is a Nash equilibrium. Let there be another Nash equilibrium {S',B'}, the outcome of which player 1 prefers and B' is the only best response to S'. In a dynamic game, the first Nash equilibrium is implausible (if player 1 moves first) because player 1 will play S', forcing the response (say) B' from player 2 and thereby attaining the second equilibrium (regardless of the preferences of player 2 over the equilibria). The first equilibrium is subgame imperfect because B does not constitute a best response to S' once S' has been played, i.e. in the subgame reached by player 1 playing S', B is not optimal for player 2. If not all strategies at a particular node were available in a subgame containing that node, it would be unhelpful in subgame perfection. One could trivially call an equilibrium subgame perfect by ignoring playable strategies to which a strategy was not a best response. 
Furthermore, if subgames cut across information sets, then a Nash equilibrium in a subgame might presuppose that a player had information in that subgame that he did not have in the larger game. References Game theory
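The backward-induction logic behind subgame perfection can be made concrete in code. The following is an illustrative sketch, not from the article, restricted to games of perfect information (where every decision node starts a subgame); the game tree mirrors the S / S' story above with hypothetical payoffs.

```python
# Illustrative sketch (not from the article): backward induction on a small
# extensive-form game of perfect information, where every decision node starts a
# subgame. The strategy profile found this way is a Nash equilibrium in every
# subgame, i.e. subgame perfect. The payoffs below are hypothetical.
from dataclasses import dataclass
from typing import Dict, Tuple, Union

Payoff = Tuple[float, float]                    # (player 1 payoff, player 2 payoff)

@dataclass
class Node:
    player: int                                 # index of the player to move (0 or 1)
    children: Dict[str, Union["Node", Payoff]]  # action -> subtree or terminal payoff

def solve(node: Union["Node", Payoff]) -> Tuple[Payoff, Union[str, None]]:
    """Backward induction: return (payoff profile, best action) for this subgame."""
    if not isinstance(node, Node):              # terminal node: payoff profile is given
        return node, None
    def value(action: str) -> float:            # what the mover gets from this action
        return solve(node.children[action])[0][node.player]
    best = max(node.children, key=value)        # mover's best response within the subgame
    return solve(node.children[best])[0], best

# Player 1 picks S or S'; player 2 then best-responds inside the resulting subgame.
game = Node(0, {
    "S":  Node(1, {"B": (1.0, 2.0), "B'": (0.0, 0.0)}),
    "S'": Node(1, {"B": (0.0, 0.0), "B'": (2.0, 1.0)}),
})
print(solve(game))   # ((2.0, 1.0), "S'"): player 1 plays S', player 2 answers B'
```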
Subgame
[ "Mathematics" ]
803
[ "Game theory" ]
1,287,577
https://en.wikipedia.org/wiki/Vertex%20operator%20algebra
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence. The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method. The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms. We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld. Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics. Formal definition Vertex algebra A vertex algebra is a collection of data that satisfy certain axioms. Data a vector space , called the space of states. The underlying field is typically taken to be the complex numbers, although Borcherds's original formulation allowed for an arbitrary commutative ring. an identity element , sometimes written or to indicate a vacuum state. an endomorphism , called "translation". 
(Borcherds's original formulation included a system of divided powers of , because he did not assume the ground ring was divisible.) a linear multiplication map , where is the space of all formal Laurent series with coefficients in . This structure has some alternative presentations: as an infinite collection of bilinear products where and , so that for each , there is an such that for . as a left-multiplication map . This is the 'state-to-field' map of the so-called state-field correspondence. For each , the endomorphism-valued formal distribution is called a vertex operator or a field, and the coefficient of is the operator . In the context of vertex algebras, a field is more precisely an element of , which can be written such that for any for sufficiently small (which may depend on ). The standard notation for the multiplication is Axioms These data are required to satisfy the following axioms: Identity. For any and . Translation. , and for any , Locality (Jacobi identity, or Borcherds identity). For any , there exists a positive integer such that: Equivalent formulations of locality axiom The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity: , where we define the formal delta series by: Borcherds initially used the following two identities: for any vectors u, v, and w, and integers m and n we have and . He later gave a more expansive version that is equivalent but easier to use: for any vectors u, v, and w, and integers m, n, and q we have Finally, there is a formal function version of locality: For any , there is an element such that and are the corresponding expansions of in and . Vertex operator algebra A vertex operator algebra is a vertex algebra equipped with a conformal element , such that the vertex operator is the weight two Virasoro field : and satisfies the following properties: , where is a constant called the central charge, or rank of . In particular, the coefficients of this vertex operator endow with an action of the Virasoro algebra with central charge . acts semisimply on with integer eigenvalues that are bounded below. Under the grading provided by the eigenvalues of , the multiplication on is homogeneous in the sense that if and are homogeneous, then is homogeneous of degree . The identity has degree 0, and the conformal element has degree 2. . A homomorphism of vertex algebras is a map of the underlying vector spaces that respects the additional identity, translation, and multiplication structure. Homomorphisms of vertex operator algebras have "weak" and "strong" forms, depending on whether they respect conformal vectors. Commutative vertex algebras A vertex algebra is commutative if all vertex operators commute with each other. This is equivalent to the property that all products lie in , or that . Thus, an alternative definition for a commutative vertex algebra is one in which all vertex operators are regular at . Given a commutative vertex algebra, the constant terms of multiplication endow the vector space with a commutative and associative ring structure, the vacuum vector is a unit and is a derivation. Hence the commutative vertex algebra equips with the structure of a commutative unital algebra with derivation. Conversely, any commutative ring with derivation has a canonical vertex algebra structure, where we set , so that restricts to a map which is the multiplication map with the algebra product. 
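Because the displayed formulas in this section were lost in extraction, the following block restates the data and axioms in one standard convention as a sketch; the symbols V (state space), 1 (vacuum), T (translation) and Y (state-field map) are supplied here, and conventions differ between sources.

```latex
% One standard presentation of the vertex algebra axioms (sketch; symbols V, 1, T, Y
% supplied here, conventions vary between sources):
\begin{aligned}
&Y(a,z)=\sum_{n\in\mathbb{Z}}a_{(n)}z^{-n-1},\qquad a_{(n)}\in\operatorname{End}(V),\\
&\textbf{Identity: } Y(\mathbf{1},z)=\operatorname{Id}_V,\qquad Y(a,z)\mathbf{1}\in a+zV[[z]],\\
&\textbf{Translation: } T\mathbf{1}=0,\qquad [T,Y(a,z)]=\partial_z Y(a,z),\\
&\textbf{Locality: } (z-w)^{N}\,\bigl[Y(a,z),Y(b,w)\bigr]=0\quad\text{for some }N\geq 0,\\
&\text{formal delta series: } \delta(z-w)=\sum_{n\in\mathbb{Z}}w^{n}z^{-n-1}.
\end{aligned}
```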
If the derivation vanishes, we may set to obtain a vertex operator algebra concentrated in degree zero. Any finite-dimensional vertex algebra is commutative. Thus even the smallest examples of noncommutative vertex algebras require significant introduction. Basic properties The translation operator in a vertex algebra induces infinitesimal symmetries on the product structure, and satisfies the following properties: , so is determined by . (skew-symmetry) For a vertex operator algebra, the other Virasoro operators satisfy similar properties: (quasi-conformality) for all . (Associativity, or Cousin property): For any , the element given in the definition also expands to in . The associativity property of a vertex algebra follows from the fact that the commutator of and is annihilated by a finite power of , i.e., one can expand it as a finite linear combination of derivatives of the formal delta function in , with coefficients in . Reconstruction: Let be a vertex algebra, and let be a set of vectors, with corresponding fields . If is spanned by monomials in the positive weight coefficients of the fields (i.e., finite products of operators applied to , where is negative), then we may write the operator product of such a monomial as a normally ordered product of divided power derivatives of fields (here, normal ordering means polar terms on the left are moved to the right). Specifically, More generally, if one is given a vector space with an endomorphism and vector , and one assigns to a set of vectors a set of fields that are mutually local, whose positive weight coefficients generate , and that satisfy the identity and translation conditions, then the previous formula describes a vertex algebra structure. Operator product expansion In vertex algebra theory, due to associativity, we can abuse notation to write, for This is the operator product expansion. Equivalently, Since the normal ordered part is regular in and , this can be written more in line with physics conventions as where the equivalence relation denotes equivalence up to regular terms. Commonly used OPEs Here some OPEs frequently found in conformal field theory are recorded. Examples from Lie algebras The basic examples come from infinite-dimensional Lie algebras. Heisenberg vertex operator algebra A basic example of a noncommutative vertex algebra is the rank 1 free boson, also called the Heisenberg vertex operator algebra. It is "generated" by a single vector b, in the sense that by applying the coefficients of the field b(z) := Y(b,z) to the vector 1, we obtain a spanning set. The underlying vector space is the infinite-variable polynomial ring , where for positive , acts obviously by multiplication, and acts as . The action of b0 is multiplication by zero, producing the "momentum zero" Fock representation V0 of the Heisenberg Lie algebra (generated by bn for integers n, with commutation relations [bn,bm]=n δn,–m), induced by the trivial representation of the subalgebra spanned by bn, n ≥ 0. The Fock space V0 can be made into a vertex algebra by the following definition of the state-operator map on a basis with each , where denotes normal ordering of an operator . The vertex operators may also be written as a functional of a multivariable function f as: if we understand that each term in the expansion of f is normal ordered. The rank n free boson is given by taking an n-fold tensor product of the rank 1 free boson. 
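As a concrete anchor for the rank 1 free boson just described, one common convention for the generating field, its modes, and the reconstruction of vertex operators is sketched below; the symbols b_n, x_n and V_0 are supplied here.

```latex
% Rank 1 Heisenberg (free boson) conventions, as a sketch:
\begin{aligned}
&b(z)=Y(b,z)=\sum_{n\in\mathbb{Z}}b_{n}z^{-n-1},\qquad [b_{m},b_{n}]=m\,\delta_{m+n,0},\qquad
 b_{n}\mathbf{1}=0\ \ (n\geq 0),\\
&V_{0}\cong\mathbb{C}[x_{1},x_{2},\dots],\qquad
 b_{-n}\mapsto x_{n}\cdot{},\qquad b_{n}\mapsto n\,\partial_{x_{n}}\ \ (n>0),\\
&\text{reconstruction: } Y(b_{-n_{1}}\cdots b_{-n_{k}}\mathbf{1},z)
 =\;:\!\frac{\partial_{z}^{\,n_{1}-1}b(z)}{(n_{1}-1)!}\cdots
        \frac{\partial_{z}^{\,n_{k}-1}b(z)}{(n_{k}-1)!}\!:\,.
\end{aligned}
```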
For any vector b in n-dimensional space, one has a field b(z) whose coefficients are elements of the rank n Heisenberg algebra, whose commutation relations have an extra inner product term: [bn,cm]=n (b,c) δn,–m. The Heisenberg vertex operator algebra has a one-parameter family of conformal vectors, given by with central charge . When , there is the following formula for the Virasoro character: This is the generating function for partitions, and is also written as q^{1/24} times the weight −1/2 modular form 1/η (the reciprocal of the Dedekind eta function). The rank n free boson then has an n-parameter family of Virasoro vectors, and when those parameters are zero, the character is q^{n/24} times the weight −n/2 modular form η^{−n}. Virasoro vertex operator algebra Virasoro vertex operator algebras are important for two reasons: First, the conformal element in a vertex operator algebra canonically induces a homomorphism from a Virasoro vertex operator algebra, so they play a universal role in the theory. Second, they are intimately connected to the theory of unitary representations of the Virasoro algebra, and these play a major role in conformal field theory. In particular, the unitary Virasoro minimal models are simple quotients of these vertex algebras, and their tensor products provide a way to combinatorially construct more complicated vertex operator algebras. The Virasoro vertex operator algebra is defined as an induced representation of the Virasoro algebra: If we choose a central charge c, there is a unique one-dimensional module for the subalgebra C[z]∂z + K for which K acts by c·Id, and C[z]∂z acts trivially, and the corresponding induced module is spanned by polynomials in L_{−n} = −z^{−n−1}∂z as n ranges over integers greater than 1. The module then has partition function . This space has a vertex operator algebra structure, where the vertex operators are defined by: and . The fact that the Virasoro field L(z) is local with respect to itself can be deduced from the formula for its self-commutator: where c is the central charge. Given a vertex algebra homomorphism from a Virasoro vertex algebra of central charge c to any other vertex algebra, the vertex operator attached to the image of ω automatically satisfies the Virasoro relations, i.e., the image of ω is a conformal vector. Conversely, any conformal vector in a vertex algebra induces a distinguished vertex algebra homomorphism from some Virasoro vertex operator algebra. The Virasoro vertex operator algebras are simple, except when c has the form 1 − 6(p−q)^2/pq for coprime integers p, q strictly greater than 1 – this follows from Kac's determinant formula. In these exceptional cases, one has a unique maximal ideal, and the corresponding quotient is called a minimal model. When p = q+1, the vertex algebras are unitary representations of Virasoro, and their modules are known as discrete series representations. They play an important role in conformal field theory in part because they are unusually tractable, and for small p, they correspond to well-known statistical mechanics systems at criticality, e.g., the Ising model, the tri-critical Ising model, the three-state Potts model, etc. By work of Weiqiang Wang concerning fusion rules, we have a full description of the tensor categories of unitary minimal models. For example, when c = 1/2 (Ising), there are three irreducible modules with lowest L_0-weight 0, 1/2, and 1/16, and its fusion ring is Z[x,y]/(x^2 − 1, y^2 − x − 1, xy − y). 
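For reference, the Virasoro commutation relations, the standard one-parameter family of free-boson conformal vectors mentioned above, and the corresponding character can be written as follows; this is a sketch in one common normalization, with the parameter called s here.

```latex
% Virasoro relations and the free-boson conformal vectors (sketch; parameter s assumed):
\begin{aligned}
&[L_{m},L_{n}]=(m-n)L_{m+n}+\frac{c}{12}(m^{3}-m)\,\delta_{m+n,0},\\
&\omega_{s}=\tfrac{1}{2}b_{-1}^{2}\mathbf{1}+s\,b_{-2}\mathbf{1},\qquad c(s)=1-12s^{2},\\
&\operatorname{tr}_{V_{0}}q^{L_{0}-c/24}\Big|_{s=0}
 =q^{-1/24}\prod_{n\geq 1}\frac{1}{1-q^{n}}=\frac{1}{\eta(q)}.
\end{aligned}
```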
Affine vertex algebra By replacing the Heisenberg Lie algebra with an untwisted affine Kac–Moody Lie algebra (i.e., the universal central extension of the loop algebra on a finite-dimensional simple Lie algebra), one may construct the vacuum representation in much the same way as the free boson vertex algebra is constructed. This algebra arises as the current algebra of the Wess–Zumino–Witten model, which produces the anomaly that is interpreted as the central extension. Concretely, pulling back the central extension along the inclusion yields a split extension, and the vacuum module is induced from the one-dimensional representation of the latter on which a central basis element acts by some chosen constant called the "level". Since central elements can be identified with invariant inner products on the finite type Lie algebra , one typically normalizes the level so that the Killing form has level twice the dual Coxeter number. Equivalently, level one gives the inner product for which the longest root has norm 2. This matches the loop algebra convention, where levels are discretized by third cohomology of simply connected compact Lie groups. By choosing a basis Ja of the finite type Lie algebra, one may form a basis of the affine Lie algebra using Jan = Ja tn together with a central element K. By reconstruction, we can describe the vertex operators by normal ordered products of derivatives of the fields When the level is non-critical, i.e., the inner product is not minus one half of the Killing form, the vacuum representation has a conformal element, given by the Sugawara construction. For any choice of dual bases Ja, Ja with respect to the level 1 inner product, the conformal element is and yields a vertex operator algebra whose central charge is . At critical level, the conformal structure is destroyed, since the denominator is zero, but one may produce operators Ln for n ≥ –1 by taking a limit as k approaches criticality. Modules Much like ordinary rings, vertex algebras admit a notion of module, or representation. Modules play an important role in conformal field theory, where they are often called sectors. A standard assumption in the physics literature is that the full Hilbert space of a conformal field theory decomposes into a sum of tensor products of left-moving and right-moving sectors: That is, a conformal field theory has a vertex operator algebra of left-moving chiral symmetries, a vertex operator algebra of right-moving chiral symmetries, and the sectors moving in a given direction are modules for the corresponding vertex operator algebra. Definition Given a vertex algebra V with multiplication Y, a V-module is a vector space M equipped with an action YM: V ⊗ M → M((z)), satisfying the following conditions: (Identity) YM(1,z) = IdM (Associativity, or Jacobi identity) For any u, v ∈ V, w ∈ M, there is an element such that YM(u,z)YM(v,x)w and YM(Y(u,z–x)v,x)w are the corresponding expansions of in M((z))((x)) and M((x))((z–x)). Equivalently, the following "Jacobi identity" holds: The modules of a vertex algebra form an abelian category. When working with vertex operator algebras, the previous definition is sometimes given the name weak -module, and genuine V-modules must respect the conformal structure given by the conformal vector . More precisely, they are required to satisfy the additional condition that L0 acts semisimply with finite-dimensional eigenspaces and eigenvalues bounded below in each coset of Z. 
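A sketch of the affine commutation relations and the Sugawara conformal element discussed in the affine vertex algebra paragraph above, in one common normalization; here κ denotes the chosen level-1 invariant form, k the level and h∨ the dual Coxeter number, all symbols supplied for illustration.

```latex
% Affine Kac-Moody relations and the Sugawara construction (sketch, common normalization):
\begin{aligned}
&[J^{a}_{m},J^{b}_{n}]=[J^{a},J^{b}]_{m+n}+m\,\kappa(J^{a},J^{b})\,\delta_{m+n,0}\,K,\\
&\omega_{\mathrm{Sug}}=\frac{1}{2(k+h^{\vee})}\sum_{a}J^{a}_{-1}J_{a,-1}\mathbf{1},\qquad
 c=\frac{k\,\dim\mathfrak{g}}{k+h^{\vee}}\qquad (k\neq -h^{\vee}).
\end{aligned}
```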
Work of Huang, Lepowsky, Miyamoto, and Zhang has shown at various levels of generality that modules of a vertex operator algebra admit a fusion tensor product operation, and form a braided tensor category. When the category of V-modules is semisimple with finitely many irreducible objects, the vertex operator algebra V is called rational. Rational vertex operator algebras satisfying an additional finiteness hypothesis (known as Zhu's C2-cofiniteness condition) are known to be particularly well-behaved, and are called regular. For example, Zhu's 1996 modular invariance theorem asserts that the characters of modules of a regular VOA form a vector-valued representation of . In particular, if a VOA is holomorphic, that is, its representation category is equivalent to that of vector spaces, then its partition function is -invariant up to a constant. Huang showed that the category of modules of a regular VOA is a modular tensor category, and its fusion rules satisfy the Verlinde formula. Heisenberg algebra modules Modules of the Heisenberg algebra can be constructed as Fock spaces for which are induced representations of the Heisenberg Lie algebra, given by a vacuum vector satisfying for , , and being acted on freely by the negative modes for . The space can be written as . Every irreducible, -graded Heisenberg algebra module with gradation bounded below is of this form. These are used to construct lattice vertex algebras, which as vector spaces are direct sums of Heisenberg modules, when the image of is extended appropriately to module elements. The module category is not semisimple, since one may induce a representation of the abelian Lie algebra where b0 acts by a nontrivial Jordan block. For the rank n free boson, one has an irreducible module Vλ for each vector λ in complex n-dimensional space. Each vector b ∈ Cn yields the operator b0, and the Fock space Vλ is distinguished by the property that each such b0 acts as scalar multiplication by the inner product (b, λ). Twisted modules Unlike ordinary rings, vertex algebras admit a notion of twisted module attached to an automorphism. For an automorphism σ of order N, the action has the form V ⊗ M → M((z1/N)), with the following monodromy condition: if u ∈ V satisfies σ u = exp(2πik/N)u, then un = 0 unless n satisfies n+k/N ∈ Z (there is some disagreement about signs among specialists). Geometrically, twisted modules can be attached to branch points on an algebraic curve with a ramified Galois cover. In the conformal field theory literature, twisted modules are called twisted sectors, and are intimately connected with string theory on orbifolds. Additional examples Vertex operator algebra defined by an even lattice The lattice vertex algebra construction was the original motivation for defining vertex algebras. It is constructed by taking a sum of irreducible modules for the Heisenberg algebra corresponding to lattice vectors, and defining a multiplication operation by specifying intertwining operators between them. That is, if is an even lattice (if the lattice is not even, the structure obtained is instead a vertex superalgebra), the lattice vertex algebra decomposes into free bosonic modules as: Lattice vertex algebras are canonically attached to double covers of even integral lattices, rather than the lattices themselves. While each such lattice has a unique lattice vertex algebra up to isomorphism, the vertex algebra construction is not functorial, because lattice automorphisms have an ambiguity in lifting. 
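Zhu's modular invariance theorem mentioned earlier in this section can be phrased as follows; a sketch, with q = e^{2πiτ} and the acting group being the modular group SL(2,Z).

```latex
% Characters and Zhu's modular invariance (sketch):
\begin{aligned}
&\operatorname{ch}_{M}(\tau)=\operatorname{tr}_{M}\,q^{L_{0}-c/24},\qquad q=e^{2\pi i\tau},\\
&\text{for a regular VOA, the span of }\{\operatorname{ch}_{M}\}\text{ over the irreducible modules }M\\
&\text{is invariant under }\ \tau\mapsto\frac{a\tau+b}{c\tau+d},\qquad
 \begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL(2,\mathbb{Z}).
\end{aligned}
```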
The double covers in question are uniquely determined up to isomorphism by the following rule: elements have the form for lattice vectors (i.e., there is a map to sending to α that forgets signs), and multiplication satisfies the relations eαeβ = (–1)(α,β)eβeα. Another way to describe this is that given an even lattice , there is a unique (up to coboundary) normalised cocycle with values such that , where the normalization condition is that ε(α, 0) = ε(0, α) = 1 for all . This cocycle induces a central extension of by a group of order 2, and we obtain a twisted group ring with basis , and multiplication rule – the cocycle condition on ensures associativity of the ring. The vertex operator attached to lowest weight vector in the Fock space is where is a shorthand for the linear map that takes any element of the α-Fock space to the monomial . The vertex operators for other elements of the Fock space are then determined by reconstruction. As in the case of the free boson, one has a choice of conformal vector, given by an element s of the vector space , but the condition that the extra Fock spaces have integer L0 eigenvalues constrains the choice of s: for an orthonormal basis , the vector 1/2 xi,12 + s2 must satisfy for all λ ∈ Λ, i.e., s lies in the dual lattice. If the even lattice is generated by its "root vectors" (those satisfying (α, α)=2), and any two root vectors are joined by a chain of root vectors with consecutive inner products non-zero then the vertex operator algebra is the unique simple quotient of the vacuum module of the affine Kac–Moody algebra of the corresponding simply laced simple Lie algebra at level one. This is known as the Frenkel–Kac (or Frenkel–Kac–Segal) construction, and is based on the earlier construction by Sergio Fubini and Gabriele Veneziano of the tachyonic vertex operator in the dual resonance model. Among other features, the zero modes of the vertex operators corresponding to root vectors give a construction of the underlying simple Lie algebra, related to a presentation originally due to Jacques Tits. In particular, one obtains a construction of all ADE type Lie groups directly from their root lattices. And this is commonly considered the simplest way to construct the 248-dimensional group E8. Monster vertex algebra The monster vertex algebra (also called the "moonshine module") is the key to Borcherds's proof of the Monstrous moonshine conjectures. It was constructed by Frenkel, Lepowsky, and Meurman in 1988. It is notable because its character is the j-invariant with no constant term, , and its automorphism group is the monster group. It is constructed by orbifolding the lattice vertex algebra constructed from the Leech lattice by the order 2 automorphism induced by reflecting the Leech lattice in the origin. That is, one forms the direct sum of the Leech lattice VOA with the twisted module, and takes the fixed points under an induced involution. Frenkel, Lepowsky, and Meurman conjectured in 1988 that is the unique holomorphic vertex operator algebra with central charge 24, and partition function . This conjecture is still open. Chiral de Rham complex Malikov, Schechtman, and Vaintrob showed that by a method of localization, one may canonically attach a bcβγ (boson–fermion superfield) system to a smooth complex manifold. This complex of sheaves has a distinguished differential, and the global cohomology is a vertex superalgebra. 
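The vertex operator attached to the lowest-weight vector of the α-Fock space, whose displayed formula is missing above, has the following standard shape; this is a sketch in which the cocycle factor is absorbed into e_α, and conventions differ by such factors.

```latex
% Lattice (Frenkel-Kac) vertex operator for the lowest-weight vector of the alpha-Fock
% space (sketch; cocycle factors suppressed):
Y(e^{\alpha},z)=\exp\!\Big(\sum_{n\geq 1}\frac{\alpha_{-n}}{n}z^{n}\Big)\,
\exp\!\Big(-\sum_{n\geq 1}\frac{\alpha_{n}}{n}z^{-n}\Big)\,e_{\alpha}\,z^{\alpha_{0}},
\qquad
V_{\Lambda}\;\cong\;\bigoplus_{\alpha\in\Lambda}V_{\alpha}.
```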
Ben-Zvi, Heluani, and Szczesny showed that a Riemannian metric on the manifold induces an N=1 superconformal structure, which is promoted to an N=2 structure if the metric is Kähler and Ricci-flat, and a hyperkähler structure induces an N=4 structure. Borisov and Libgober showed that one may obtain the two-variable elliptic genus of a compact complex manifold from the cohomology of the Chiral de Rham complex. If the manifold is Calabi–Yau, then this genus is a weak Jacobi form. Vertex algebra associated to a surface defect A vertex algebra can arise as a subsector of higher dimensional quantum field theory which localizes to a two real-dimensional submanifold of the space on which the higher dimensional theory is defined. A prototypical example is the construction of Beem, Leemos, Liendo, Peelaers, Rastelli, and van Rees which associates a vertex algebra to any 4d N=2 superconformal field theory. This vertex algebra has the property that its character coincides with the Schur index of the 4d superconformal theory. When the theory admits a weak coupling limit, the vertex algebra has an explicit description as a BRST reduction of a bcβγ system. Vertex operator superalgebras By allowing the underlying vector space to be a superspace (i.e., a Z/2Z-graded vector space ) one can define a vertex superalgebra by the same data as a vertex algebra, with 1 in V+ and T an even operator. The axioms are essentially the same, but one must incorporate suitable signs into the locality axiom, or one of the equivalent formulations. That is, if a and b are homogeneous, one compares Y(a,z)Y(b,w) with εY(b,w)Y(a,z), where ε is –1 if both a and b are odd and 1 otherwise. If in addition there is a Virasoro element ω in the even part of V2, and the usual grading restrictions are satisfied, then V is called a vertex operator superalgebra. One of the simplest examples is the vertex operator superalgebra generated by a single free fermion ψ. As a Virasoro representation, it has central charge 1/2, and decomposes as a direct sum of Ising modules of lowest weight 0 and 1/2. One may also describe it as a spin representation of the Clifford algebra on the quadratic space t1/2C[t,t−1](dt)1/2 with residue pairing. The vertex operator superalgebra is holomorphic, in the sense that all modules are direct sums of itself, i.e., the module category is equivalent to the category of vector spaces. The tensor square of the free fermion is called the free charged fermion, and by boson–fermion correspondence, it is isomorphic to the lattice vertex superalgebra attached to the odd lattice Z. This correspondence has been used by Date–Jimbo–Kashiwara-Miwa to construct soliton solutions to the KP hierarchy of nonlinear PDEs. Superconformal structures The Virasoro algebra has some supersymmetric extensions that naturally appear in superconformal field theory and superstring theory. The N=1, 2, and 4 superconformal algebras are of particular importance. Infinitesimal holomorphic superconformal transformations of a supercurve (with one even local coordinate z and N odd local coordinates θ1,...,θN) are generated by the coefficients of a super-stress–energy tensor T(z, θ1, ..., θN). When N=1, T has odd part given by a Virasoro field L(z), and even part given by a field subject to commutation relations By examining the symmetry of the operator products, one finds that there are two possibilities for the field G: the indices n are either all integers, yielding the Ramond algebra, or all half-integers, yielding the Neveu–Schwarz algebra. 
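Two standard sets of relations behind this discussion, written out as a sketch in one common convention: the single free fermion, and the N=1 superconformal algebra in its Ramond and Neveu–Schwarz forms.

```latex
% Free fermion and N=1 superconformal relations (sketch, one common convention):
\begin{aligned}
&\psi(z)=\sum_{r}\psi_{r}z^{-r-1/2},\qquad \{\psi_{r},\psi_{s}\}=\delta_{r+s,0},\qquad c=\tfrac12,\\
&[L_{m},G_{r}]=\Big(\tfrac{m}{2}-r\Big)G_{m+r},\qquad
 \{G_{r},G_{s}\}=2L_{r+s}+\frac{c}{3}\Big(r^{2}-\tfrac14\Big)\delta_{r+s,0},\\
&r,s\in\mathbb{Z}\ \text{(Ramond)}\quad\text{or}\quad r,s\in\mathbb{Z}+\tfrac12\ \text{(Neveu–Schwarz)}.
\end{aligned}
```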
These algebras have unitary discrete series representations at central charge and unitary representations for all c greater than 3/2, with lowest weight h only constrained by h ≥ 0 for Neveu–Schwarz and h ≥ c/24 for Ramond. An N=1 superconformal vector in a vertex operator algebra V of central charge c is an odd element τ ∈ V of weight 3/2, such that G−1/2τ = ω, and the coefficients of G(z) yield an action of the N=1 Neveu–Schwarz algebra at central charge c. For N=2 supersymmetry, one obtains even fields L(z) and J(z), and odd fields G+(z) and G−(z). The field J(z) generates an action of the Heisenberg algebra (described by physicists as a U(1) current). There are both Ramond and Neveu–Schwarz N=2 superconformal algebras, depending on whether the indexing on the G fields is integral or half-integral. However, the U(1) current gives rise to a one-parameter family of isomorphic superconformal algebras interpolating between Ramond and Neveu–Schwarz, and this deformation of structure is known as spectral flow. The unitary representations are given by discrete series with central charge c = 3 − 6/m for integers m at least 3, and a continuum of lowest weights for c > 3. An N=2 superconformal structure on a vertex operator algebra is a pair of odd elements τ+, τ− of weight 3/2, and an even element μ of weight 1 such that τ± generate G±(z), and μ generates J(z). For N=3 and 4, unitary representations only have central charges in a discrete family, with c=3k/2 and 6k, respectively, as k ranges over positive integers. Additional constructions Fixed point subalgebras: Given an action of a symmetry group on a vertex operator algebra, the subalgebra of fixed vectors is also a vertex operator algebra. In 2013, Miyamoto proved that two important finiteness properties, namely Zhu's condition C2 and regularity, are preserved when taking fixed points under finite solvable group actions. Current extensions: Given a vertex operator algebra and some modules of integral conformal weight, one may under favorable circumstances describe a vertex operator algebra structure on the direct sum. Lattice vertex algebras are a standard example of this. Another family of examples is given by framed VOAs, which start with tensor products of Ising models, and add modules that correspond to suitably even codes. Orbifolds: Given a finite cyclic group acting on a holomorphic VOA, it is conjectured that one may construct a second holomorphic VOA by adjoining irreducible twisted modules and taking fixed points under an induced automorphism, as long as those twisted modules have suitable conformal weight. This is known to be true in special cases, e.g., groups of order at most 3 acting on lattice VOAs. The coset construction (due to Goddard, Kent, and Olive): Given a vertex operator algebra V of central charge c and a set S of vectors, one may define the commutant C(V,S) to be the subspace of vectors v that strictly commute with all fields coming from S, i.e., such that Y(s,z)v ∈ V[[z]] for all s ∈ S. This turns out to be a vertex subalgebra, with Y, T, and identity inherited from V. If S is a VOA of central charge cS, the commutant is a VOA of central charge c − cS. For example, the embedding of SU(2) at level k+1 into the tensor product of two SU(2) algebras at levels k and 1 yields the Virasoro discrete series with p=k+2, q=k+3, and this was used to prove their existence in the 1980s. Again with SU(2), the embedding of level k+2 into the tensor product of level k and level 2 yields the N=1 superconformal discrete series. 
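The central-charge bookkeeping behind the coset statement above can be made explicit; with c(k) = 3k/(k+2) for su(2) at level k, a short sketch gives

```latex
% GKO coset central charge (sketch):
c_{\text{coset}}=c(k)+c(1)-c(k+1)
=\frac{3k}{k+2}+1-\frac{3(k+1)}{k+3}
=1-\frac{6}{(k+2)(k+3)},
```

which is the minimal-model value 1 − 6(p−q)^2/pq with p = k+2 and q = k+3.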
BRST reduction: For any degree 1 vector v satisfying v02=0, the cohomology of this operator has a graded vertex superalgebra structure. More generally, one may use any weight 1 field whose residue has square zero. The usual method is to tensor with fermions, as one then has a canonical differential. An important special case is quantum Drinfeld–Sokolov reduction applied to affine Kac–Moody algebras to obtain affine W-algebras as degree 0 cohomology. These W algebras also admit constructions as vertex subalgebras of free bosons given by kernels of screening operators. Related algebraic structures If one considers only the singular part of the OPE in a vertex algebra, one arrives at the definition of a Lie conformal algebra. Since one is often only concerned with the singular part of the OPE, this makes Lie conformal algebras a natural object to study. There is a functor from vertex algebras to Lie conformal algebras that forgets the regular part of OPEs, and it has a left adjoint, called the "universal vertex algebra" functor. Vacuum modules of affine Kac–Moody algebras and Virasoro vertex algebras are universal vertex algebras, and in particular, they can be described very concisely once the background theory is developed. There are several generalizations of the notion of vertex algebra in the literature. Some mild generalizations involve a weakening of the locality axiom to allow monodromy, e.g., the abelian intertwining algebras of Dong and Lepowsky. One may view these roughly as vertex algebra objects in a braided tensor category of graded vector spaces, in much the same way that a vertex superalgebra is such an object in the category of super vector spaces. More complicated generalizations relate to q-deformations and representations of quantum groups, such as in work of Frenkel–Reshetikhin, Etingof–Kazhdan, and Li. Beilinson and Drinfeld introduced a sheaf-theoretic notion of chiral algebra that is closely related to the notion of vertex algebra, but is defined without using any visible power series. Given an algebraic curve X, a chiral algebra on X is a DX-module A equipped with a multiplication operation on X×X that satisfies an associativity condition. They also introduced an equivalent notion of factorization algebra that is a system of quasicoherent sheaves on all finite products of the curve, together with a compatibility condition involving pullbacks to the complement of various diagonals. Any translation-equivariant chiral algebra on the affine line can be identified with a vertex algebra by taking the fiber at a point, and there is a natural way to attach a chiral algebra on a smooth algebraic curve to any vertex operator algebra. See also Operator algebra Zhu algebra Notes Citations Sources Conformal field theory Lie algebras Non-associative algebra
Vertex operator algebra
[ "Mathematics" ]
7,763
[ "Non-associative algebra", "Mathematical structures", "Algebraic structures" ]
1,287,818
https://en.wikipedia.org/wiki/Flensburg%20radar%20detector
The FuG 227 Flensburg was a German passive radar receiver developed by Siemens & Halske and introduced into service in early 1944. It used wing- and tail-mounted dipole antennae and was sensitive to the mid-VHF band frequencies of 170–220 MHz, subharmonics of the Monica radar's 300 MHz transmissions. It allowed Luftwaffe night fighters to home in on the Monica tail warning radar fitted to RAF bombers. An RAF bomber carrying Monica crashed in German-occupied territory in February 1943, only seven days into the set's operational life, and the captured equipment allowed the development of Flensburg. On the morning of 13 July 1944, a Junkers Ju 88G-1 night fighter of 7.Staffel/NJG 2 equipped with Flensburg landed at RAF Woodbridge by mistake and was captured. When British military scientists examined the Flensburg equipment, they quickly realised its purpose and informed the RAF, who ordered Monica to be withdrawn from all RAF Bomber Command aircraft. Subsequently, further variants of Flensburg (Flensburg II to Flensburg VI) were developed for detecting Allied radar jammers. Only Flensburg II and III were used operationally. References Further reading External links Analysis report on the Ju 88 at Woodbridge (PDF format; 44 kB) World War II German radars Avionics Radar warning receivers Military equipment introduced from 1940 to 1944
Flensburg radar detector
[ "Technology" ]
278
[ "Warning systems", "Avionics", "Radar warning receivers", "Aircraft instruments" ]
1,288,985
https://en.wikipedia.org/wiki/Stueckelberg%20action
In field theory, the Stueckelberg action (named after Ernst Stueckelberg) describes a massive spin-1 field as an R (the real numbers are the Lie algebra of U(1)) Yang–Mills theory coupled to a real scalar field . This scalar field takes on values in a real 1D affine representation of R with as the coupling strength. This is a special case of the Higgs mechanism, where, in effect, the mass of the Higgs scalar excitation has been taken to infinity, so the Higgs has decoupled and can be ignored, resulting in a nonlinear, affine representation of the field, instead of a linear representation — in contemporary terminology, a U(1) nonlinear σ-model. Gauge-fixing the scalar to zero yields the Proca action. This explains why, unlike the case for non-abelian vector fields, quantum electrodynamics with a massive photon is, in fact, renormalizable, even though it is not manifestly gauge invariant (after the Stückelberg scalar has been eliminated in the Proca action). Stueckelberg extension of the Standard Model The Stueckelberg extension of the Standard Model (StSM) consists of a gauge invariant kinetic term for a massive U(1) gauge field. Such a term can be implemented into the Lagrangian of the Standard Model without destroying the renormalizability of the theory and further provides a mechanism for mass generation that is distinct from the Higgs mechanism in the context of Abelian gauge theories. The model involves a non-trivial mixing of the Stueckelberg and the Standard Model sectors by including an additional term in the effective Lagrangian of the Standard Model given by The first term above is the Stueckelberg field strength term, the coefficients are topological mass parameters, and the remaining scalar is the axion. After symmetry breaking in the electroweak sector the photon remains massless. The model predicts a new type of gauge boson, dubbed the Stueckelberg Z′, which inherits a very distinct narrow decay width in this model. The St sector of the StSM decouples from the SM in the limit . Stueckelberg-type couplings arise quite naturally in theories involving compactifications of higher-dimensional string theory; in particular, these couplings appear in the dimensional reduction of the ten-dimensional N = 1 supergravity coupled to supersymmetric Yang–Mills gauge fields in the presence of internal gauge fluxes. In the context of intersecting D-brane model building, products of U(N) gauge groups are broken to their SU(N) subgroups via the Stueckelberg couplings and thus the Abelian gauge fields become massive. Further, in a much simpler fashion one may consider a model with only one extra dimension (a type of Kaluza–Klein model) and compactify down to a four-dimensional theory. The resulting Lagrangian will contain massive vector gauge bosons that acquire masses through the Stueckelberg mechanism. See also Higgs mechanism#Affine Higgs mechanism References Gauge theories Physics beyond the Standard Model Symmetry Theoretical physics
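As a concrete anchor for the abelian case described at the start of this article (whose displayed Lagrangian is missing), a minimal sketch in one common signature convention is:

```latex
% Stueckelberg Lagrangian for a massive U(1) field (sketch; metric signature (+,-,-,-)):
\mathcal{L}=-\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
+\tfrac{1}{2}\bigl(\partial_{\mu}\phi+mA_{\mu}\bigr)\bigl(\partial^{\mu}\phi+mA^{\mu}\bigr),
\qquad
A_{\mu}\to A_{\mu}+\partial_{\mu}\Lambda,\quad \phi\to\phi-m\Lambda .
```

Fixing the gauge φ = 0 removes the scalar and leaves the Proca Lagrangian with mass term ½m²A_μA^μ; sign and signature conventions vary between sources.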
Stueckelberg action
[ "Physics", "Mathematics" ]
703
[ "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Geometry", "Physics beyond the Standard Model", "Symmetry" ]
1,289,909
https://en.wikipedia.org/wiki/Washburn%27s%20equation
In physics, Washburn's equation describes capillary flow in a bundle of parallel cylindrical tubes; it is extended with some issues also to imbibition into porous materials. The equation is named after Edward Wight Washburn; also known as Lucas–Washburn equation, considering that Richard Lucas wrote a similar paper three years earlier, or the Bell-Cameron-Lucas-Washburn equation, considering J.M. Bell and F.K. Cameron's discovery of the form of the equation in 1906. Derivation In its most general form the Lucas Washburn equation describes the penetration length () of a liquid into a capillary pore or tube with time as , where is a simplified diffusion coefficient. This relationship, which holds true for a variety of situations, captures the essence of Lucas and Washburn's equation and shows that capillary penetration and fluid transport through porous structures exhibit diffusive behaviour akin to that which occurs in numerous physical and chemical systems. The diffusion coefficient is governed by the geometry of the capillary as well as the properties of the penetrating fluid. A liquid having a dynamic viscosity and surface tension will penetrate a distance into the capillary whose pore radius is following the relationship: Where is the contact angle between the penetrating liquid and the solid (tube wall). Washburn's equation is also used commonly to determine the contact angle of a liquid to a powder using a force tensiometer. In the case of porous materials, many issues have been raised both about the physical meaning of the calculated pore radius and the real possibility to use this equation for the calculation of the contact angle of the solid. The equation is derived for capillary flow in a cylindrical tube in the absence of a gravitational field, but is sufficiently accurate in many cases when the capillary force is still significantly greater than the gravitational force. In his paper from 1921 Washburn applies Poiseuille's Law for fluid motion in a circular tube. Inserting the expression for the differential volume in terms of the length of fluid in the tube , one obtains where is the sum over the participating pressures, such as the atmospheric pressure , the hydrostatic pressure and the equivalent pressure due to capillary forces . is the viscosity of the liquid, and is the coefficient of slip, which is assumed to be 0 for wetting materials. is the radius of the capillary. The pressures in turn can be written as where is the density of the liquid and its surface tension. is the angle of the tube with respect to the horizontal axis. is the contact angle of the liquid on the capillary material. Substituting these expressions leads to the first-order differential equation for the distance the fluid penetrates into the tube : Washburn's constant The Washburn constant may be included in Washburn's equation. It is calculated as follows: Fluid inertia In the derivation of Washburn's equation, the inertia of the liquid is ignored as negligible. This is apparent in the dependence of length to the square root of time, , which gives an arbitrarily large velocity dL/dt for small values of t. An improved version of Washburn's equation, called Bosanquet equation, takes the inertia of the liquid into account. 
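Writing L for the penetration length, r for the pore radius, γ for the surface tension, η for the dynamic viscosity and θ for the contact angle (symbols supplied here, since the displayed equations were lost), the capillary-driven solution takes the familiar form below; this sketch neglects gravity, slip and inertia.

```latex
% Lucas-Washburn solution for capillary-driven penetration (sketch; gravity, slip
% and inertia neglected; symbols supplied here):
L(t)=\sqrt{D\,t},\qquad
D=\frac{\gamma\,r\cos\theta}{2\eta},
\qquad\text{equivalently}\qquad
L^{2}=\frac{\gamma\,r\,t\cos\theta}{2\eta}.
```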
Applications Inkjet printing The penetration of a liquid into the substrate flowing under its own capillary pressure can be calculated using a simplified version of Washburn's equation: where the surface tension-to-viscosity ratio represents the speed of ink penetration into the substrate. In reality, the evaporation of solvents limits the extent of liquid penetration in a porous layer, and thus, for meaningful modelling of inkjet printing physics, it is appropriate to use models which account for evaporation effects in limited capillary penetration. Food According to physicist and Ig Nobel prize winner Len Fisher, the Washburn equation can be extremely accurate for more complex materials, including biscuits. Following an informal celebration called national biscuit dunking day, some newspaper articles quoted the equation as Fisher's equation. Novel capillary pump The flow behaviour in a traditional capillary follows Washburn's equation. Recently, novel capillary pumps with a constant pumping flow rate independent of the liquid viscosity were developed; these have a significant advantage over the traditional capillary pump, whose flow rate is not constant but follows Washburn behaviour. These new capillary pump concepts have great potential to improve the performance of lateral flow tests. See also Bosanquet equation Mercury intrusion porosimetry (MIP) References External links Powder wettability measurement with the Washburn method Equations of fluid dynamics Porous media
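The simplified relation above is easy to evaluate numerically. The following sketch uses the L(t) = sqrt(D t) form with illustrative values for water imbibing a one-micrometre pore; all numbers are assumptions, not data from the article.

```python
# Illustrative numeric sketch (values are assumptions, not from the article):
# Lucas-Washburn penetration length L(t) = sqrt(gamma * r * cos(theta) * t / (2 * eta))
# for water imbibing a cylindrical pore.
import math

gamma = 0.0728      # surface tension of water at 20 C, N/m
eta   = 1.0e-3      # dynamic viscosity of water, Pa*s
r     = 1.0e-6      # pore radius, m (1 micrometre, e.g. a coated-paper pore)
theta = math.radians(30.0)   # assumed contact angle

def washburn_length(t_seconds: float) -> float:
    """Penetration depth in metres after t seconds, ignoring gravity and inertia."""
    D = gamma * r * math.cos(theta) / (2.0 * eta)   # effective diffusivity, m^2/s
    return math.sqrt(D * t_seconds)

for t in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"t = {t:6.3f} s  ->  L = {washburn_length(t) * 1e3:.2f} mm")
```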
Washburn's equation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
972
[ "Equations of fluid dynamics", "Equations of physics", "Porous media", "Materials science", "Fluid dynamics" ]
1,290,402
https://en.wikipedia.org/wiki/Canker
A plant canker is a small area of dead tissue, which grows slowly, often over years. Some cankers are of only minor consequence, but others are ultimately lethal and therefore can have major economic implications for agriculture and horticulture. Their causes include a wide range of organisms such as fungi, bacteria, mycoplasmas and viruses. The majority of canker-causing organisms are bound to a unique host species or genus, but a few will attack other plants. Weather (via frost or windstorm damage) and animal damage can also stress the plant, resulting in cankers. Other causes of cankers include pruning when the bark is wet or using un-sterilized tools. Although fungicides or bactericides can treat some cankers, often the only available treatment is to destroy the infected plant to contain the disease. Examples Apple canker, caused by the fungus Neonectria galligena, formerly Nectria galligena. Ash bacterial canker, now understood to be caused by the bacterium Pseudomonas savastanoi, rather than Pseudomonas syringae. After DNA-relatedness studies Pseudomonas savastanoi has been instated as a new species. Butternut canker, caused by the fungus Sirococcus clavigignenti-juglandacearum Bleeding canker of horse chestnut, caused by the bacterium Pseudomonas syringae pv. aesculi Citrus canker, caused by the bacterium Xanthomonas axonopodis Cypress canker, caused by the fungus Seiridium cardinale Foamy bark canker of oaks in California, caused by the fungus Geosmithia putterillii Dogwood anthracnose, caused by the fungus Discula destructiva Grape canker, caused by the fungus Eutypa lata Honey locust canker, caused by the fungus Thyronectria austro-americana Larch canker, caused by the fungus Lachnellula willkommii Mulberry canker, caused by the fungus Gibberella baccata Oak canker, caused by the fungus Diplodia quercina Pine pitch canker, caused by the fungus Fusarium circinatum Plane anthracnose, caused by the fungus Apiognomonia veneta Poplar canker, caused by the bacterium Xanthomonas populi Rapeseed stem canker, caused by the blackleg fungus Leptosphaeria maculans Rose cankers, caused by the fungi Leptosphaeria coniothyrium and Cryptosporella umbrina Scleroderris canker, caused by the fungus Gremmeniella abietina Southwest canker, caused by environmental conditions (frost damage and sun-scalding) Strawberry anthracnose, caused by the fungus species complexes Colletotrichum acutatum and C. gloeosporioides (incl. C. fragariae) Tomato anthracnose, caused by the fungus Colletotrichum coccodes Willow anthracnose, caused by the fungus Marssonina salicicola See also Forest pathology Burl or Burr References External links Canker Diseases of Trees Plant pathogens and diseases
Canker
[ "Biology" ]
667
[ "Plant pathogens and diseases", "Plants" ]
27,174,683
https://en.wikipedia.org/wiki/RLC%20circuit
An RLC circuit is an electrical circuit consisting of a resistor (R), an inductor (L), and a capacitor (C), connected in series or in parallel. The name of the circuit is derived from the letters that are used to denote the constituent components of this circuit, where the sequence of the components may vary from RLC. The circuit forms a harmonic oscillator for current, and resonates in a manner similar to an LC circuit. Introducing the resistor increases the decay of these oscillations, which is also known as damping. The resistor also reduces the peak resonant frequency. Some resistance is unavoidable even if a resistor is not specifically included as a component. RLC circuits have many applications as oscillator circuits. Radio receivers and television sets use them for tuning to select a narrow frequency range from ambient radio waves. In this role, the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. The three circuit elements, R, L and C, can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some with practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance. Inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit. Basic concepts Resonance An important property of this circuit is its ability to resonate at a specific frequency, the resonance frequency, . Frequencies are measured in units of hertz. In this article, angular frequency, , is used because it is more mathematically convenient. This is measured in radians per second. They are related to each other by a simple proportion, Resonance occurs because energy for this situation is stored in two different ways: in an electric field as the capacitor is charged and in a magnetic field as current flows through the inductor. Energy can be transferred from one to the other within the circuit and this can be oscillatory. A mechanical analogy is a weight suspended on a spring which will oscillate up and down when released. This is no passing metaphor; a weight on a spring is described by exactly the same second order differential equation as an RLC circuit and for all the properties of the one system there will be found an analogous property of the other. The mechanical property answering to the resistor in the circuit is friction in the spring–weight system. Friction will slowly bring any oscillation to a halt if there is no external force driving it. Likewise, the resistance in an RLC circuit will "damp" the oscillation, diminishing it with time if there is no driving AC power source in the circuit. The resonant frequency is defined as the frequency at which the impedance of the circuit is at a minimum. Equivalently, it can be defined as the frequency at which the impedance is purely real (that is, purely resistive). This occurs because the impedances of the inductor and capacitor at resonance are equal but of opposite sign and cancel out. 
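In symbols (supplied here, since the displayed expressions are missing), the resonance condition just described reads:

```latex
% Resonance condition for a series or parallel RLC circuit (sketch):
\omega_{0}=\frac{1}{\sqrt{LC}},\qquad
f_{0}=\frac{\omega_{0}}{2\pi}=\frac{1}{2\pi\sqrt{LC}},\qquad
\omega_{0}L=\frac{1}{\omega_{0}C}.
```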
Circuits where L and C are in parallel rather than series actually have a maximum impedance rather than a minimum impedance. For this reason they are often described as antiresonators; it is still usual, however, to name the frequency at which this occurs as the resonant frequency. Natural frequency The resonance frequency is defined in terms of the impedance presented to a driving source. It is still possible for the circuit to carry on oscillating (for a time) after the driving source has been removed or it is subjected to a step in voltage (including a step down to zero). This is similar to the way that a tuning fork will carry on ringing after it has been struck, and the effect is often called ringing. This effect is the peak natural resonance frequency of the circuit and in general is not exactly the same as the driven resonance frequency, although the two will usually be quite close to each other. Various terms are used by different authors to distinguish the two, but resonance frequency unqualified usually means the driven resonance frequency. The driven frequency may be called the undamped resonance frequency or undamped natural frequency and the peak frequency may be called the damped resonance frequency or the damped natural frequency. The reason for this terminology is that the driven resonance frequency in a series or parallel resonant circuit has the value. This is exactly the same as the resonance frequency of a lossless LC circuit – that is, one with no resistor present. The resonant frequency for a driven RLC circuit is the same as a circuit in which there is no damping, hence undamped resonant frequency. The resonant frequency peak amplitude, on the other hand, does depend on the value of the resistor and is described as the damped resonant frequency. A highly damped circuit will fail to resonate at all, when not driven. A circuit with a value of resistor that causes it to be just on the edge of ringing is called critically damped. Either side of critically damped are described as underdamped (ringing happens) and overdamped (ringing is suppressed). Circuits with topologies more complex than straightforward series or parallel (some examples described later in the article) have a driven resonance frequency that deviates from , and for those the undamped resonance frequency, damped resonance frequency and driven resonance frequency can all be different. Damping Damping is caused by the resistance in the circuit. It determines whether or not the circuit will resonate naturally (that is, without a driving source). Circuits that will resonate in this way are described as underdamped and those that will not are overdamped. Damping attenuation (symbol ) is measured in nepers per second. However, the unitless damping factor (symbol , zeta) is often a more useful measure, which is related to by The special case of is called critical damping and represents the case of a circuit that is just on the border of oscillation. It is the minimum damping that can be applied without causing oscillation. Bandwidth The resonance effect can be used for filtering, the rapid change in impedance near resonance can be used to pass or block signals close to the resonance frequency. Both band-pass and band-stop filters can be constructed and some filter circuits are shown later in the article. A key parameter in filter design is bandwidth. 
The bandwidth is measured between the cutoff frequencies, most frequently defined as the frequencies at which the power passed through the circuit has fallen to half the value passed at resonance. There are two of these half-power frequencies, one above, and one below the resonance frequency where is the bandwidth, is the lower half-power frequency and is the upper half-power frequency. The bandwidth is related to attenuation by where the units are radians per second and nepers per second respectively. Other units may require a conversion factor. A more general measure of bandwidth is the fractional bandwidth, which expresses the bandwidth as a fraction of the resonance frequency and is given by The fractional bandwidth is also often stated as a percentage. The damping of filter circuits is adjusted to result in the required bandwidth. A narrow band filter, such as a notch filter, requires low damping. A wide band filter requires high damping. factor The factor is a widespread measure used to characterise resonators. It is defined as the peak energy stored in the circuit divided by the average energy dissipated in it per radian at resonance. Low- circuits are therefore damped and lossy and high- circuits are underdamped and prone to amplitude extremes if driven at the resonant frequency. is related to bandwidth; low- circuits are wide-band and high- circuits are narrow-band. In fact, it happens that is the inverse of fractional bandwidth factor is directly proportional to selectivity, as the factor depends inversely on bandwidth. For a series resonant circuit (as shown below), the factor can be calculated as follows: where is the reactance either of or of at resonance, and Scaled parameters The parameters , , and are all scaled to . This means that circuits which have similar parameters share similar characteristics regardless of whether or not they are operating in the same frequency band. The article next gives the analysis for the series RLC circuit in detail. Other configurations are not described in such detail, but the key differences from the series case are given. The general form of the differential equations given in the series circuit section are applicable to all second order circuits and can be used to describe the voltage or current in any element of each circuit. Series circuit In this circuit, the three components are all in series with the voltage source. The governing differential equation can be found by substituting into Kirchhoff's voltage law (KVL) the constitutive equation for each of the three elements. From the KVL, where , and are the voltages across , , and , respectively, and is the time-varying voltage from the source. Substituting and into the equation above yields: For the case where the source is an unchanging voltage, taking the time derivative and dividing by leads to the following second order differential equation: This can usefully be expressed in a more generally applicable form: and are both in units of angular frequency. is called the neper frequency, or attenuation, and is a measure of how fast the transient response of the circuit will die away after the stimulus has been removed. Neper occurs in the name because the units can also be considered to be nepers per second, neper being a logarithmic unit of attenuation. is the angular resonance frequency. 
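The bandwidth and Q relations introduced above can be illustrated numerically with a short sketch before the series-circuit expressions that follow; the component values are assumed, and the expression Q = (1/R)√(L/C) used here is the standard series-circuit result quoted later in the article.

```python
import math

# Assumed example values for a series RLC circuit
R = 10.0    # ohms
L = 1e-3    # henries
C = 100e-9  # farads

w0 = 1.0 / math.sqrt(L * C)          # angular resonance frequency, rad/s
Q = (1.0 / R) * math.sqrt(L / C)     # quality factor of the series circuit
bw = w0 / Q                          # bandwidth in rad/s (equals R/L for this circuit)
frac_bw = bw / w0                    # fractional bandwidth, the inverse of Q

print(f"w0 = {w0:.3e} rad/s, Q = {Q:.1f}")
print(f"bandwidth = {bw:.3e} rad/s, fractional bandwidth = {frac_bw:.3f}")
```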
For the case of the series RLC circuit these two parameters are given by: A useful parameter is the damping factor, , which is defined as the ratio of these two; although, sometimes is not used, and is referred to as damping factor instead; hence requiring careful specification of one's use of that term. In the case of the series RLC circuit, the damping factor is given by The value of the damping factor determines the type of transient that the circuit will exhibit. Transient response The differential equation has the characteristic equation, The roots of the equation in -domain are, The general solution of the differential equation is an exponential in either root or a linear superposition of both, The coefficients and are determined by the boundary conditions of the specific problem being analysed. That is, they are set by the values of the currents and voltages in the circuit at the onset of the transient and the presumed value they will settle to after infinite time. The differential equation for the circuit solves in three different ways depending on the value of . These are overdamped (), underdamped (), and critically damped (). Overdamped response The overdamped response () is The overdamped response is a decay of the transient current without oscillation. Underdamped response The underdamped response () is By applying standard trigonometric identities the two trigonometric functions may be expressed as a single sinusoid with phase shift, The underdamped response is a decaying oscillation at frequency . The oscillation decays at a rate determined by the attenuation . The exponential in describes the envelope of the oscillation. and (or and the phase shift in the second form) are arbitrary constants determined by boundary conditions. The frequency is given by This is called the damped resonance frequency or the damped natural frequency. It is the frequency the circuit will naturally oscillate at if not driven by an external source. The resonance frequency, , which is the frequency at which the circuit will resonate when driven by an external oscillation, may often be referred to as the undamped resonance frequency to distinguish it. Critically damped response The critically damped response () is The critically damped response represents the circuit response that decays in the fastest possible time without going into oscillation. This consideration is important in control systems where it is required to reach the desired state as quickly as possible without overshooting. and are arbitrary constants determined by boundary conditions. Laplace domain The series RLC can be analyzed for both transient and steady AC state behavior using the Laplace transform. If the voltage source above produces a waveform with Laplace-transformed (where is the complex frequency ), the KVL can be applied in the Laplace domain: where is the Laplace-transformed current through all components. Solving for : And rearranging, we have Laplace admittance Solving for the Laplace admittance : Simplifying using parameters and defined in the previous section, we have Poles and zeros The zeros of are those values of where : The poles of are those values of where . By the quadratic formula, we find The poles of are identical to the roots and of the characteristic polynomial of the differential equation in the section above. 
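As a sketch of this classification, the snippet below computes the attenuation α = R/(2L), the undamped resonance frequency ω0 = 1/√(LC), the damping factor ζ = α/ω0, and the roots of the characteristic equation s² + 2αs + ω0² = 0 for assumed component values, then reports the damping regime. The values are illustrative only.

```python
import cmath
import math

# Assumed example values for a series RLC circuit
R = 50.0    # ohms
L = 1e-3    # henries
C = 100e-9  # farads

alpha = R / (2.0 * L)           # neper frequency (attenuation), Np/s
w0 = 1.0 / math.sqrt(L * C)     # undamped angular resonance frequency, rad/s
zeta = alpha / w0               # damping factor

# Roots of the characteristic equation s^2 + 2*alpha*s + w0^2 = 0
disc = cmath.sqrt(alpha**2 - w0**2)
s1, s2 = -alpha + disc, -alpha - disc

if zeta > 1:
    regime = "overdamped"
elif zeta < 1:
    regime = "underdamped"
else:
    regime = "critically damped"

print(f"alpha = {alpha:.3e} Np/s, w0 = {w0:.3e} rad/s, zeta = {zeta:.2f} ({regime})")
print(f"s1 = {s1:.3e}, s2 = {s2:.3e}")
```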
General solution For an arbitrary , the solution obtained by inverse transform of is: In the underdamped case, : In the critically damped case, : In the overdamped case, : where , and and are the usual hyperbolic functions. Sinusoidal steady state Sinusoidal steady state is represented by letting , where is the imaginary unit. Taking the magnitude of the above equation with this substitution: and the current as a function of can be found from There is a peak value of . The value of at this peak is, in this particular case, equal to the undamped natural resonance frequency. This means that the maximum voltage across the resistor, and thus maximum heat dissipation, occurs at the natural frequency. From the frequency response of the current, the frequency response of the voltages across the various circuit elements can also be determined (see figure). Moreover, the maximum voltage across the capacitor happens at a frequency whereas the maximum voltage across the inductor occurs at It holds: . Parallel circuit The properties of the parallel RLC circuit can be obtained from the duality relationship of electrical circuits and considering that the parallel RLC is the dual impedance of a series RLC. Considering this, it becomes clear that the differential equations describing this circuit are identical to the general form of those describing a series RLC. For the parallel circuit, the attenuation is given by and the damping factor is consequently Likewise, the other scaled parameters, fractional bandwidth and are also reciprocals of each other. This means that a wide-band, low- circuit in one topology will become a narrow-band, high- circuit in the other topology when constructed from components with identical values. The fractional bandwidth and of the parallel circuit are given by Notice that the formulas here are the reciprocals of the formulas for the series circuit, given above. Frequency domain The complex admittance of this circuit is given by adding up the admittances of the components: The change from a series arrangement to a parallel arrangement results in the circuit having a peak in impedance at resonance rather than a minimum, so the circuit is an anti-resonator. The graph opposite shows that there is a minimum in the frequency response of the current at the resonance frequency when the circuit is driven by a constant voltage. On the other hand, if driven by a constant current, there would be a maximum in the voltage which would follow the same curve as the current in the series circuit. Other configurations A series resistor with the inductor in a parallel LC circuit as shown in Figure 4 is a topology commonly encountered where there is a need to take into account the resistance of the coil winding and its self-capacitance. Parallel LC circuits are frequently used for bandpass filtering and the is largely governed by this resistance. The resonant frequency of this circuit is This is the resonant frequency of the circuit defined as the frequency at which the admittance has zero imaginary part. The frequency that appears in the generalised form of the characteristic equation (which is the same for this circuit as previously) is not the same frequency. In this case it is the natural, undamped resonant frequency: The frequency , at which the impedance magnitude is maximum, is given by where is the quality factor of the coil. 
This can be well approximated by Furthermore, the exact maximum impedance magnitude is given by For values of this can be well approximated by In the same vein, a resistor in parallel with the capacitor in a series LC circuit can be used to represent a capacitor with a lossy dielectric. This configuration is shown in Figure 5. The resonant frequency (frequency at which the impedance has zero imaginary part) in this case is given by while the frequency at which the impedance magnitude is minimum is given by where . History The first evidence that a capacitor could produce electrical oscillations was discovered in 1826 by French scientist Felix Savary. He found that when a Leyden jar was discharged through a wire wound around an iron needle, sometimes the needle was left magnetized in one direction and sometimes in the opposite direction. He correctly deduced that this was caused by a damped oscillating discharge current in the wire, which reversed the magnetization of the needle back and forth until it was too small to have an effect, leaving the needle magnetized in a random direction. American physicist Joseph Henry repeated Savary's experiment in 1842 and came to the same conclusion, apparently independently. British scientist William Thomson (Lord Kelvin) in 1853 showed mathematically that the discharge of a Leyden jar through an inductance should be oscillatory, and derived its resonant frequency. British radio researcher Oliver Lodge, by discharging a large battery of Leyden jars through a long wire, created a tuned circuit with its resonant frequency in the audio range, which produced a musical tone from the spark when it was discharged. In 1857, German physicist Berend Wilhelm Feddersen photographed the spark produced by a resonant Leyden jar circuit in a rotating mirror, providing visible evidence of the oscillations. In 1868, Scottish physicist James Clerk Maxwell calculated the effect of applying an alternating current to a circuit with inductance and capacitance, showing that the response is maximum at the resonant frequency. The first example of an electrical resonance curve was published in 1887 by German physicist Heinrich Hertz in his pioneering paper on the discovery of radio waves, showing the length of spark obtainable from his spark-gap LC resonator detectors as a function of frequency. One of the first demonstrations of resonance between tuned circuits was Lodge's "syntonic jars" experiment around 1889 He placed two resonant circuits next to each other, each consisting of a Leyden jar connected to an adjustable one-turn coil with a spark gap. When a high voltage from an induction coil was applied to one tuned circuit, creating sparks and thus oscillating currents, sparks were excited in the other tuned circuit only when the inductors were adjusted to resonance. Lodge and some English scientists preferred the term "syntony" for this effect, but the term "resonance" eventually stuck. The first practical use for RLC circuits was in the 1890s in spark-gap radio transmitters to allow the receiver to be tuned to the transmitter. The first patent for a radio system that allowed tuning was filed by Lodge in 1897, although the first practical systems were invented in 1900 by Anglo Italian radio pioneer Guglielmo Marconi. Applications Variable tuned circuits A very frequent use of these circuits is in the tuning circuits of analogue radios. 
Adjustable tuning is commonly achieved with a parallel plate variable capacitor which allows the value of to be changed and tune to stations on different frequencies. For the IF stage in the radio where the tuning is preset in the factory, the more usual solution is an adjustable core in the inductor to adjust . In this design, the core (made of a high permeability material that has the effect of increasing inductance) is threaded so that it can be screwed further in, or screwed further out of the inductor winding as required. Filters In the filtering application, the resistor becomes the load that the filter is working into. The value of the damping factor is chosen based on the desired bandwidth of the filter. For a wider bandwidth, a larger value of the damping factor is required (and vice versa). The three components give the designer three degrees of freedom. Two of these are required to set the bandwidth and resonant frequency. The designer is still left with one which can be used to scale , and to convenient practical values. Alternatively, may be predetermined by the external circuitry which will use the last degree of freedom. Low-pass filter An RLC circuit can be used as a low-pass filter. The circuit configuration is shown in Figure 6. The corner frequency, that is, the frequency of the 3 dB point, is given by This is also the bandwidth of the filter. The damping factor is given by High-pass filter A high-pass filter is shown in Figure 7. The corner frequency is the same as the low-pass filter: The filter has a stop-band of this width. Band-pass filter A band-pass filter can be formed with an RLC circuit by either placing a series LC circuit in series with the load resistor or else by placing a parallel LC circuit in parallel with the load resistor. These arrangements are shown in Figures 8 and 9 respectively. The centre frequency is given by and the bandwidth for the series circuit is The shunt version of the circuit is intended to be driven by a high impedance source, that is, a constant current source. Under those conditions the bandwidth is Band-stop filter Figure 10 shows a band-stop filter formed by a series LC circuit in shunt across the load. Figure 11 is a band-stop filter formed by a parallel LC circuit in series with the load. The first case requires a high impedance source so that the current is diverted into the resonator when it becomes low impedance at resonance. The second case requires a low impedance source so that the voltage is dropped across the antiresonator when it becomes high impedance at resonance. Oscillators For applications in oscillator circuits, it is generally desirable to make the attenuation (or equivalently, the damping factor) as small as possible. In practice, this objective requires making the circuit's resistance as small as physically possible for a series circuit, or alternatively increasing to as much as possible for a parallel circuit. In either case, the RLC circuit becomes a good approximation to an ideal LC circuit. However, for very low-attenuation circuits (high -factor), issues such as dielectric losses of coils and capacitors can become important. In an oscillator circuit or equivalently As a result, Voltage multiplier In a series RLC circuit at resonance, the current is limited only by the resistance of the circuit If is small, consisting only of the inductor winding resistance say, then this current will be large. 
It will drop a voltage across the inductor of An equal magnitude voltage will also be seen across the capacitor but in antiphase to the inductor. If can be made sufficiently small, these voltages can be several times the input voltage. The voltage ratio is, in fact, the of the circuit, A similar effect is observed with currents in the parallel circuit. Even though the circuit appears as high impedance to the external source, there is a large current circulating in the internal loop of the parallel inductor and capacitor. Pulse discharge circuit An overdamped series RLC circuit can be used as a pulse discharge circuit. Often it is useful to know the values of components that could be used to produce a waveform. This is described by the form Such a circuit could consist of an energy storage capacitor, a load in the form of a resistance, some circuit inductance and a switch – all in series. The initial conditions are that the capacitor is at voltage, , and there is no current flowing in the inductor. If the inductance is known, then the remaining parameters are given by the following – capacitance: resistance (total of circuit and load): initial terminal voltage of capacitor: Rearranging for the case where is known – capacitance: inductance (total of circuit and load): initial terminal voltage of capacitor: See also RC circuit RL circuit Linear circuit Footnotes References Bibliography Analog circuits Electronic filter topology
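The voltage magnification described in the closing sections can be illustrated numerically. In this sketch the component values are assumed; it uses the series-circuit relations I = Vin/R at resonance and Q = (1/R)√(L/C), so the inductor and capacitor voltages both come out Q times the input.

```python
import math

# Assumed example values (not from the article)
V_in = 1.0   # volts, source amplitude at the resonant frequency
R = 2.0      # ohms, total series resistance (e.g. winding resistance only)
L = 1e-3     # henries
C = 100e-9   # farads

w0 = 1.0 / math.sqrt(L * C)
Q = (1.0 / R) * math.sqrt(L / C)

I = V_in / R            # current at resonance, limited only by R
V_L = I * w0 * L        # voltage magnitude across the inductor
V_C = I / (w0 * C)      # equal-magnitude voltage across the capacitor (antiphase)

print(f"Q = {Q:.1f}")
print(f"V_L = {V_L:.1f} V, V_C = {V_C:.1f} V for a {V_in} V input")  # both equal Q * V_in
```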
RLC circuit
[ "Engineering" ]
5,198
[ "Analog circuits", "Electronic engineering" ]
27,177,206
https://en.wikipedia.org/wiki/Piezoresponse%20force%20microscopy
Piezoresponse force microscopy (PFM) is a variant of atomic force microscopy (AFM) that allows imaging and manipulation of piezoelectric/ferroelectric materials domains. This is achieved by bringing a sharp conductive probe into contact with a ferroelectric surface (or piezoelectric material) and applying an alternating current (AC) bias to the probe tip in order to excite deformation of the sample through the converse piezoelectric effect (CPE). The resulting deflection of the probe cantilever is detected through standard split photodiode detector methods and then demodulated by use of a lock-in amplifier (LiA). In this way topography and ferroelectric domains can be imaged simultaneously with high resolution. Basic principles General overview Piezoresponse force microscopy is a technique which since its inception and first implementation by Güthner and Dransfeld has steadily attracted more and more interest. This is due in large part to the many benefits and few drawbacks that PFM offers researchers in varying fields from ferroelectrics, semiconductors and even biology. In its most common format PFM allows for identification of domains from relatively large scale e.g. 100×100 μm2 scans right down to the nanoscale with the added advantage of simultaneous imaging of sample surface topography. Also possible is the ability to switch regions of ferroelectric domains with the application of a sufficiently high bias to the probe which opens up the opportunity of investigating domain formation on nanometre length scales with nanosecond time resolution. Many recent advances have expanded the list of applications for PFM and further increased this powerful technique. Indeed what started as a user modified AFM has now attracted the attention of the major SPM manufacturers so much so that in fact many now supply ‘ready-made’ systems specifically for PFM each with novel features for research. This is testament to the growth of the field and reflects the numbers of users throughout the scientific world who are at the forefront of scientific research. Consider that a static or DC voltage applied to a piezoelectric surface will produce a displacement but as applied fields are quite low and the piezoelectric tensor coefficients are relatively small then the physical displacement will also be small such that it is below the level of possible detection of the system. Take as an example, the d33 piezoelectric tensor coefficient of BaTiO3, it has a value of 85.6 pmV−1 meaning that applying 1 V across the material results in a displacement of 85.6 pm or 0.0856 nm, a minute cantilever displacement even for the high precision of AFM deflection detection. In order to separate this low level signal from random noise a lock-in technique is used wherein a modulated voltage reference signal, of frequency ω and amplitude Vac is applied to the tip giving rise to an oscillatory deformation of the sample surface, from the equilibrium position d0 with amplitude D, and an associated phase difference φ. The resulting movement of the cantilever is detected by the photodiode and so an oscillating surface displacement is converted into an oscillating voltage. A lock-in-amplifier (LiA) is then able to retrieve the amplitude and phase of the CPE induced surface deformation by the process outlined below. Converse piezoelectric effect The converse piezoelectric effect (CPE) describes how an applied electric field will create a resultant strain which in turn leads to a physical deformation of the material. 
This effect can be described through the constitutive equations. The CPE can be written as where Xi is the strain tensor, dki is the piezoelectric tensor, and Ek is the electric field. If the piezoelectric tensor is considered to be that of the tetragonal crystal system (that of BaTiO3) then it is such that the equation will lead to the strain components for an applied field. If the field is applied exclusively in one direction i.e. E3 for example, then the resulting strain components are: d31E3, d32E3, d33E3 Thus for an electric field applied along the c-axis of BaTiO3 i.e. E3, then the resulting deformation of the crystal will be an elongation along the c-axis and an axially symmetric contraction along the other orthogonal directions. PFM uses the effect of this deformation to detect domains and also to determine their orientation. Conductive probe The most important property of the probe for use in PFM is that it should be conducting. This is generally required in order to provide a means of applying a bias to the sample, and can be achieved through manufacturing standard silicon probes and coating them in a conductive material. Common coatings are platinum, gold, tungsten and even conductive diamond. Lock-in amplifier In the general case a lock-in amplifier (LiA) ‘compares’ an input signal against that of a reference signal (either generated internally or supplied by an external function generator) in order to separate the information contained in the input signal at the frequency of the reference signal. This is called demodulation and is done in a number of easy steps. The reference signal , and input signal, , are multiplied together to give the demodulator output, where A is the input signal Amplitude and B is the reference signal Amplitude, ω is the frequency of both the reference and input signals, and φ is any phase shift between the two signals. The above equation has an AC component at twice the frequency of the original signals (second term) and a DC component (first term) whose value is related to both the amplitude and phase of the input signal. The demodulator output is sent through a low-pass filter to remove the 2ω component and leave the DC component then the signal is integrated over a period of time defined as the Time Constant, τLiA which is a user-definable parameter. Several different outputs are commonly available from a LiA: X output is the demodulator output and Y is the second demodulator output which is shifted by 90° in reference to the first output, together they hold both the phase, θ, and magnitude, R, information and are given by and However, phase and amplitude of the input signal can also be calculated and made output from the LiA if desired, so that the full amount of information is available. The phase output can be determined from the following equation: The magnitude is then given by: This allows R to be calculated even if the input signal differs in phase from the reference signal. Differentiating vertical and lateral PFM signals A basic interpretation of PFM (which is generally accepted) identifies that two modes of imaging are possible, one that is sensitive to out-of-plane and one to in-plane piezoresponse, termed, vertical and lateral PFM (VPFM and LPFM) respectively. The separation of these components is possible through the use of a split photodiode detector, standard to all optical detection AFM systems. In this setup the detector is split into quadrants, nominally A, B, C and D. 
The centre of the entire detector outputs 0 V but as the laser spot moves a radial distance from this centre point, the magnitude of the voltage in output will increase linearly. A vertical deflection can be defined as {(A+B)-(C+D)}/(ABCD) so that now positive and negative voltages are ascribed to positive and negative cantilever vertical displacements. Similarly a lateral deflection is defined as {(B+D)-(A+C)}/(ABCD) to describe positive and negative torsional movements of the cantilever. So VPFM will utilise the vertical deflection signal from the photodiode detector so will only be sensitive to out-of-plane polar components and LPFM will utilise the lateral deflection signal from the photodiode and will only be sensitive to in-plane polar components. For polar components orientated such that they are parallel to the electric field the resulting oscillating movement will be entirely in-phase with the modulated electric field but for an anti-parallel alignment the motion will be 180° out-of-phase. In this way it is possible to determine the orientation of the vertical components of polarisation from analysis of the phase information, φ, contained in the input signal, readily available after demodulation in the LiA, when using the VPFM mode. In a similar sense the orientations of in-plane polar components can also be determined from the phase difference when using the LPFM mode. The amplitude of the piezoresponse of either VPFM or LPFM is also given by the LiA, in the form of the magnitude, R. Examples of PFM imaging The image shows periodically poled 180° domains in potassium titanyl phosphate (KTP) as imaged by VPFM. In the image piezoresponse amplitude can be seen where dark areas represent the zero amplitude that is expected at domain boundaries where the unit cell is cubic i.e. centrosymmetric and so therefore not ferroelectric. On the left hand side piezoresponse phase can be seen where the measured phase changes to show the out-of-plane components that are pointing out of the screen, white areas, and into the screen, dark areas. The scan area is 20×10 μm2. Below each scan is the relevant cross-section that shows in arbitrary units the PR amplitude and phase. PFM applied to biological materials PFM has been successfully applied to a range of biological materials such as teeth, bone, lung, and single collagen fibrils. It has been hypothesized that the endogenous piezoelectricity in these materials may be relevant in their mechanobiology. For example, using PFM it has been shown that a single collagen fibril as small as 100 nm behaves predominantly as a shear piezoelectric material with an effective piezoelectric constant of ~1 pm/V. Advanced PFM modes Several additions have been made to PFM that substantially increase the flexibility of the technique to probe nanoscale features. Stroboscopic PFM Stroboscopic PFM allows for time resolved imaging of switching in pseudo real-time. A voltage pulse of amplitude much higher than the coercive voltage of the sample but shorter in duration than the characteristic switching time is applied to the sample and subsequently imaged. Further pulses with the same amplitude but longer in time are then applied with regular PFM imaging at the intervals. In this way a series of images showing the switching of the sample can be obtained. Typical pulses are of tens of nanoseconds in duration and are therefore capable of resolving the first nucleation sites of domain reversal and then observing how these sites evolve. 
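A minimal numerical sketch of the lock-in demodulation and quadrant-detector arithmetic described above. All signal parameters are assumed for illustration; averaging over an integer number of reference periods stands in for the low-pass filter and time constant, and the quadrant deflections are normalised by the total sum (A+B+C+D), which is the common convention.

```python
import numpy as np

# --- Lock-in demodulation (all parameters assumed for illustration) ---
f_ref = 20e3            # reference frequency, Hz
A, phi = 0.05, 0.6      # "unknown" input amplitude (V) and phase (rad) to recover
fs = 2e6                # sampling rate, Hz
t = np.arange(0, 200 / f_ref, 1 / fs)   # 200 reference periods

rng = np.random.default_rng(0)
signal = A * np.sin(2 * np.pi * f_ref * t + phi) + 0.2 * rng.standard_normal(t.size)

ref_x = np.sin(2 * np.pi * f_ref * t)   # in-phase reference
ref_y = np.cos(2 * np.pi * f_ref * t)   # quadrature reference, shifted by 90 degrees

# Multiply and average; the average over whole periods plays the role of the
# low-pass filter and time constant, removing the 2*omega component and noise.
X = 2.0 * np.mean(signal * ref_x)
Y = 2.0 * np.mean(signal * ref_y)
R = np.hypot(X, Y)          # recovered amplitude
theta = np.arctan2(Y, X)    # recovered phase

print(f"recovered R = {R:.4f} V (true {A} V), theta = {theta:.3f} rad (true {phi} rad)")

# --- Quadrant photodiode deflection signals (assumed quadrant voltages) ---
qa, qb, qc, qd = 1.00, 1.10, 0.90, 1.00
total = qa + qb + qc + qd
vertical = ((qa + qb) - (qc + qd)) / total   # out-of-plane (VPFM) channel
lateral = ((qb + qd) - (qa + qc)) / total    # torsional / in-plane (LPFM) channel
print(f"vertical deflection = {vertical:+.3f}, lateral deflection = {lateral:+.3f}")
```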
Contact resonance PFM Remembering that in PFM an AC bias of a certain frequency causes a deformation of the sample material at that same frequency the system can be considered as a driven harmonic oscillator. As such there exists a resonance as a function of driving frequency. This effect has been exploited in PFM to provide an enhancement in the PR signal, thus allowing for a higher signal-to-noise ratio or similar signal-to-noise ratio at lower driving bias amplitude. Typically this contact resonance is in the kilo- to mega-hertz range which is several times higher in frequency than the first free harmonic in air of the cantilever used. However a drawback is that the contact resonance is dependent not only on the dynamic response of the cantilever but also on the elastic modulus of the sample material immediately in contact with the probe tip and so therefore can change during scanning over different areas. This leads to a change in the measured PR amplitude and so is undesirable. One method of bypassing the inherent disadvantages of contact resonance PFM is to change the driving frequency in order to shadow or track the changes in the frequency of the contact resonance. This feature as developed by Asylum Research called Dual AC™ Resonance Tracking (DART) uses two limit frequencies on either side of the contact resonance peak and so can sense changes in the peak position. It is then possible to adapt the AC bias driving frequency correspondingly in order to maintain the signal boost that results from the contact resonance. Switching spectroscopy (SS) PFM In this technique the area underneath the PFM tip is switched with simultaneous acquisition of a hysteresis loop that can be analysed to obtain information about the sample properties. A series of hysteresis loops are acquired across the sample surface in order to map the switching characteristics as a function of position. In this way an image representing switching properties such as coercive voltage, remnant polarisation, imprint and work of switching amongst others can be displayed in which each pixel displays the desired data from the hysteresis loop acquired at that point. This allows spatial analysis of switching properties to be compared with sample topography. Band Excitation PFM The Band Excitation (BE) technique for scanning probe microscopy uses a precisely determined waveform that contains specific frequencies to excite the cantilever or sample in an atomic force microscope to extract more information, and more reliable information from a sample. There are a myriad of details and complexities associated with implementing the BE technique. There is therefore a need to have a user friendly interface that allows typical microscopists access to this methodology. This software enables users of atomic force microscopes to easily: build complex band-excitation waveforms, set up the microscope scanning conditions, configure the input and output electronics to generate the waveform as a voltage signal and capture the response of the system, perform analysis on the captured response, and display the results of the measurement. Pin Point PFM The conventional PFM operates in contact mode in which the AFM tip is in contact with the sample during the scanning. Contact mode is not suitable for samples with features susceptible to damage or displacement by the tip's drag. In Pin Point PFM, the AFM tip does not contact the surface. 
The tip is halted at a height at which a predefined force threshold (a threshold at which piezoelectric response is optimal) is reached. At this height, the piezoelectric response is recorded before moving to the next point. In Pin Point mode, tip wear off is reduced significantly. Advantages and disadvantages Advantages High resolution on the nanometer scale Simultaneous acquisition of topography and piezoelectric response Allows manipulation of ferroelectric domains at nanometer scale through ferroelectric nanolithography Non-destructive imaging and fabrication technique Little sample preparation required Disadvantages Scans can be slow, e.g. tens of minutes Tip wear changes surface interaction and can affect contrast Limited to lateral range of AFM i.e. approximately 100×100 μm2 Electromechanical behavior may not be related to piezo/ferro electricity phenomena Surface needs to relatively flat and polished References External links CSI AFM - Nano-Observer PFM mode Asylum Research PFM Application Note Contact Resonance Application Note NT-MDT website with animation JPK Instruments PFM Technical Report Scanning probe microscopy
Piezoresponse force microscopy
[ "Chemistry", "Materials_science" ]
3,140
[ "Nanotechnology", "Scanning probe microscopy", "Microscopy" ]
20,971,576
https://en.wikipedia.org/wiki/Appearance%20energy
Appearance energy (also known as appearance potential) is the minimum energy that must be supplied to a gas-phase atom or molecule in order to produce an ion. In mass spectrometry with electron ionization, it corresponds to the accelerating voltage at which a given ion first appears, i.e. the minimum electron energy that produces that ion. In photoionization, it is the minimum photon energy that produces some ion signal. For example, the indene bromide ion (IndBr+) only loses bromine at an incident photon energy of 10.2 eV, so the product, indenyl, has an appearance energy of 10.2 eV. See also Ionization energy References Mass spectrometry
Appearance energy
[ "Physics", "Chemistry" ]
142
[ " and optical physics stubs", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", " molecular", "Atomic", "Physical chemistry stubs", "Matter", " and optical physics" ]
19,839,949
https://en.wikipedia.org/wiki/Electromagnetic%20buoyancy
Electromagnetic buoyancy (EMB) is a force that opposes the Lorentz force during electromagnetic phoresis of small particles or droplets in an aqueous medium. It is analogous to the ordinary effect of buoyancy observed when objects float in liquid under the influence of gravity. Though this force is still being researched, it has been clearly observed in experimental procedures. References Electromagnetism Buoyancy
Electromagnetic buoyancy
[ "Physics", "Materials_science" ]
84
[ "Electromagnetism", "Materials science stubs", "Physical phenomena", "Fundamental interactions", "Electromagnetism stubs" ]
19,840,237
https://en.wikipedia.org/wiki/Laplace%20pressure
The Laplace pressure is the pressure difference between the inside and the outside of a curved surface that forms the boundary between two fluid regions. The pressure difference is caused by the surface tension of the interface between liquid and gas, or between two immiscible liquids. The Laplace pressure is determined from the Young–Laplace equation, given as ΔP = γ(1/R1 + 1/R2), where R1 and R2 are the principal radii of curvature and γ (also denoted as σ) is the surface tension. Although signs for these values vary, sign convention usually dictates positive curvature when convex and negative when concave. The Laplace pressure is commonly used to determine the pressure difference in spherical shapes such as bubbles or droplets. In this case, R1 = R2 = R, so that ΔP = 2γ/R. For a gas bubble within a liquid, there is only one surface. For a gas bubble with a liquid wall, beyond which is again gas, there are two surfaces, each contributing to the total pressure difference. If the bubble is spherical and the outer radius differs from the inner radius by only a small distance, the two contributions add to give approximately ΔP ≈ 4γ/R. Examples A common example of use is finding the pressure inside an air bubble in pure water, where γ = 72 mN/m at 25 °C (298 K). The extra pressure inside the bubble is given here for three bubble sizes: a 1 mm bubble has negligible extra pressure; when the diameter is ~3 μm, the pressure inside the bubble is about one atmosphere higher than outside; and when the bubble is only several hundred nanometers across, the pressure inside can be several atmospheres. One should bear in mind that the surface tension in the numerator can be much smaller in the presence of surfactants or contaminants. The same calculation can be done for small oil droplets in water, where even in the presence of surfactants and a fairly low interfacial tension of γ = 5–10 mN/m, the pressure inside 100 nm diameter droplets can reach several atmospheres. See also Ostwald ripening Kelvin equation Laplace number Two balloon experiment References Pressure Fluid dynamics Bubbles (physics) Articles containing video clips
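A short numerical check of the worked example above, using ΔP = 2γ/R for a single spherical surface in pure water (the smallest diameter is an assumed representative value for "several hundred nanometers"):

```python
gamma = 0.072  # N/m, surface tension of pure water at 25 degrees C

# Bubble diameters from the example above; 500 nm is an assumed representative value
for d in (1e-3, 3e-6, 500e-9):
    r = d / 2.0
    dp = 2.0 * gamma / r                # Laplace pressure for a single spherical surface
    print(f"diameter {d*1e9:>10.0f} nm: dP = {dp:,.0f} Pa = {dp/101325:.2f} atm")
```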
Laplace pressure
[ "Physics", "Chemistry", "Engineering" ]
411
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Bubbles (physics)", "Foams", "Chemical engineering", "Pressure", "Piping", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
19,844,077
https://en.wikipedia.org/wiki/Farmall%201026
The Farmall 1026 is a row crop tractor with a hydraulic drive system, or hydro, produced by International Harvester from 1970 to 1971. Rated at 112 power take-off (PTO) horsepower, the Farmall 1026 was the first 100+ horsepower hydro tractor ever produced. The 1026 was produced as a hydro-only model. This was unique, as the other hydro models produced by International Harvester at the time were also built in gear-drive versions in addition to the hydro versions, while still maintaining the same model number. The Farmall 656 and 826, for example, were available in hydro and gear-drive versions. Production data Farmall 1026 production lasted only 2 years, with only 2,414 of the Farmall version built. The International, or standard, version production totaled only 158. These totals are quite low compared to other tractor models built by International Harvester, and today the Farmall 1026 could be considered a rare tractor, and the International 1026 extremely rare. References Tractors International Harvester vehicles Vehicles introduced in 1970
Farmall 1026
[ "Engineering" ]
218
[ "Engineering vehicles", "Tractors" ]
19,845,079
https://en.wikipedia.org/wiki/Fram%20Strait
The Fram Strait is the passage between Greenland and Svalbard, located roughly between 77°N and 81°N latitudes and centered on the prime meridian. The Greenland and Norwegian Seas lie south of Fram Strait, while the Nansen Basin of the Arctic Ocean lies to the north. Fram Strait is noted for being the only deep connection between the Arctic Ocean and the World Oceans. The dominant oceanographic features of the region are the West Spitsbergen Current on the east side of the strait and the East Greenland Current on the west. Description Fram Strait is the northernmost ocean area having ice-free conditions throughout the year. The width of the strait is about 450 km, but because of the wide continental shelves of Greenland and Spitsbergen, the deep portion of Fram Strait is only about 300 km wide. The ocean over the Greenland continental shelf is often covered with ice. Within Fram Strait, the sill connecting the Arctic and Fram Strait is 2545 m deep. The Knipovich Ridge, the northernmost section of the Mid-Atlantic Ridge, extends northward through the strait to connect to the Nansen-Gakkel Ridge of the Arctic Ocean. A rift valley, caused by sea-floor spreading, runs adjacent and parallel to the Knipovich Ridge. The Molloy Deep within Fram Strait is the deepest point of the Arctic. This small basin at 79°8.5′N and 2°47′E has a maximum depth of ±(See also: Litke Deep). The Yermak Plateau, with a mean depth of about 650 m, lies to the northwest of Spitsbergen. Historically, Fram Strait was home to a large population of Bowhead whales, then called the Greenland right whale. By mid-17th century, the Svalbard population of Bowhead whales was reduced to near extinction by excessive whaling (See also: Whaling in Spitsbergen; Smeerenburg). Western Fram Strait may be a wintering ground for this Critically Endangered population. Etymology The use of the name "Fram Strait" for the passage between Spitsbergen and Greenland appears to have come into common use in the oceanographic literature in the 1970s. Fram Strait is named after the Norwegian ship Fram. In an 1893 expedition led by Fridtjof Nansen, the Fram drifted for two years across the Arctic before exiting the Arctic through what is now known as Fram Strait. According to glaciologist and geographer Moira Dunbar, an early adopter of the name, the name "Fram Strait" originated in the Russian scientific literature. While in common use, particularly in the oceanographic scientific literature, the name appears to be unofficial. Oceanography Fram Strait is the only deep-water connection between the World Oceans and the Arctic. Other gateways are the Barents Sea Opening (BSO), the Bering Strait and various small channels in the Canadian Arctic Archipelago. They are all shallower than Fram Strait, leaving Fram Strait the only route by which deep water can be exchanged between the Atlantic and Arctic Oceans. This exchange occurs in both directions, with specific water masses identified with specific regions flowing between the Oceans. Water with characteristics of the deep Canadian and Eurasian Basins of the Arctic are observed leaving the Arctic in the deep western side of Fram Strait, for example. On the eastern side, cold water from the Norwegian Sea is observed entering the Arctic below the West Spitsbergen Current. In recent years the nature and interactions of these water masses have been changing, symptoms of the changes occurring with the ocean's climate. 
Current systems Warm, salty water is transported northward from the Atlantic by the West Spitsbergen Current in the east of the strait. The West Spitsbergen Current is the northernmost branch of the North Atlantic Current system. This water forms a water mass called the Atlantic water. The sub-surface flow has a strong seasonality with a minimal volume transport in winter. This current transports internal energy into the Arctic Ocean. The northward velocity is maximum in winter, so the heat transport is highest in winter. On the west side of the strait, the East Greenland Current flows southward on the Greenland Shelf. The current carries relatively cold, fresh water out of the Arctic, corresponding to a water mass called Polar water. The Fram Strait area is located downwind of the Transpolar Drift and is therefore covered by multi-year ice in the west of the strait, next to the coast of Greenland. Approximately 90% of sea ice exported from the Arctic is transported by the East Greenland Current. (Sea ice essentially corresponds to fresh water, since its salt content of 4 per mil is much less than the 35 per mil for sea water.) A 2019 estimate states that about "80% of the water exchanged between the Arctic ice cap and the world’s oceans passes through the Fram Strait." Long-time observations The Alfred Wegener Institute for Polar and Marine Research (AWI) and the Norwegian Polar Institute (NPI) have maintained long-term monitoring measurements in Fram Strait to obtain volume and energy budgets through this choke point. The observations also serve to assess the development of the Arctic Ocean as a sink for terrestrial organic carbon. The AWI–NPI observing array consists of a line of up to 16 moorings across Fram Strait. The mooring line has been maintained since 1997 with a spacing of roughly 25 km. At up to five different depths, the moored array measures the water velocity, temperature, and salinity of the water column. Importance for climate Computer simulations suggest that 60 to 70% of the fluctuation of the sea ice flowing through the Fram Strait is correlated with a 6–7 year fluctuation in which the Icelandic Low pressure system extends eastward into the Barents Sea. The amount of sea ice passing through the Fram Strait varies from year to year and affects the global climate through its influence on thermohaline circulation. The warming in the Fram Strait region has likely amplified Arctic shrinkage, and serves as a positive feedback mechanism for transporting more internal energy to the Arctic Ocean. In the past century, the sea surface temperature at Fram Strait has on average warmed roughly 1.9 °C (3.5 °F), and is 1.4 °C (2.5 °F) warmer than during the Medieval Warm Period. References Straits of the Arctic Ocean Straits of Norway Physical oceanography Currents of the Atlantic Ocean International straits Geography of North America Geography of Europe
Fram Strait
[ "Physics" ]
1,321
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
19,849,524
https://en.wikipedia.org/wiki/Bis%28trimethylsilyl%29acetamide
Bis(trimethylsilyl)acetamide (BSA) is an organosilicon compound with the formula (Me = CH3). It is a colorless liquid that is soluble in diverse organic solvents, but reacts rapidly with moisture and solvents containing OH and NH groups. It is used in analytical chemistry to increase the volatility of analytes, e.g., for gas chromatography. It is also used to introduce the trimethylsilyl protecting group in organic synthesis. A related reagent is N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA). Synthesis and reactions BSA is prepared by treating acetamide with trimethylsilyl chloride in the presence of a base (Me = CH3, Et = C2H5): The reaction of BSA with alcohols gives the corresponding trimethylsilyl ether, together with acetamide as a byproduct (Me = CH3): References Reagents for organic chemistry Trimethylsilyl compounds
Bis(trimethylsilyl)acetamide
[ "Chemistry" ]
228
[ "Functional groups", "Trimethylsilyl compounds", "Reagents for organic chemistry" ]
19,849,627
https://en.wikipedia.org/wiki/Shape%20waves
Shape waves are excitations propagating along Josephson vortices or fluxons. In the case of two-dimensional Josephson junctions (thick long Josephson junctions with an extra dimension) described by the 2D sine-Gordon equation, shape waves are distortions of a Josephson vortex line of an arbitrary profile. Shape waves have remarkable properties exhibiting Lorentz contraction and time dilation similar to that in special relativity. Position of the shape wave excitation on a Josephson vortex acts like a “minute-hand” showing the time in the rest-frame associated with the vortex. At some conditions, a moving vortex with the shape excitation can have less energy than the same vortex without it. References Waves
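For context, the following is a sketch of the standard sine-Gordon background (textbook results, not taken from the cited work): the 1D sine-Gordon equation in normalised units and its moving-kink (fluxon) solution, whose Lorentz-contracted width is what makes the special-relativity analogy natural.

```latex
% 1D sine-Gordon equation in normalised units
\phi_{tt} - \phi_{xx} + \sin\phi = 0
% Moving kink (Josephson fluxon) solution with velocity u < 1 (normalised units):
\phi(x,t) = 4 \arctan\!\left[\exp\!\left(\frac{x - u t}{\sqrt{1 - u^{2}}}\right)\right]
```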
Shape waves
[ "Physics" ]
148
[ "Waves", "Physical phenomena", "Motion (physics)" ]
19,850,755
https://en.wikipedia.org/wiki/United%20States%20Army%20Gas%20School
The United States Army Gas School was established during World War I at Camp A.A. Humphreys in Virginia. The first courses began in May 1918 and the school was designed to instruct commissioned and noncommissioned officers in chemical warfare. History In late October 1917 the War College was badly underprepared for large-scale chemical warfare in World War I. With many U.S. soldiers operating in a chemical environment with no knowledge of chemical warfare, the War College requested British gas officers and NCOs, and the requests were granted. The British experts arrived and were directed and coordinated by Major S.J.M. Auld. Auld was tasked with composing a "working textbook on gas" for the U.S. Army. Among Auld's recommendations was an idea the General Staff had already been considering, the establishment of a central Army Gas School. As a result of Auld's suggestion the Army Gas School was established at Camp A.A. Humphreys, Virginia. The school, beginning in May 1918, offered two initial courses. One course was a four-day class on general information about gas warfare and was offered to commissioned and noncommissioned officers. The second course was a 12-day affair for Chief Gas Officers which went into greater detail about chemical warfare. Upon its establishment, Ross A. Baker was given charge of training for Chief Gas Officers at the Army Gas School. Baker was the Chief Gas Officer at Camp Pike in Arkansas and a professor of chemistry at the University of Minnesota before taking the post at Camp Humphreys. The entire Army Gas School operation was later transferred to Camp Kendrick. See also Defence CBRN Centre United States Army CBRN School United States Army Chemical Corps United States chemical weapons program References Further reading "Army Starts a Gas School", The New York Times, 30 May 1918, accessed 19 October 2008. United States Army schools Chemical warfare facilities 1918 establishments in Virginia
United States Army Gas School
[ "Chemistry" ]
395
[ "Chemical warfare facilities" ]
19,851,252
https://en.wikipedia.org/wiki/Redlich%E2%80%93Kwong%20equation%20of%20state
In physics and thermodynamics, the Redlich–Kwong equation of state is an empirical, algebraic equation that relates temperature, pressure, and volume of gases. It is generally more accurate than the van der Waals equation and the ideal gas equation at temperatures above the critical temperature. It was formulated by Otto Redlich and Joseph Neng Shun Kwong in 1949. It showed that a two-parameter, cubic equation of state could well reflect reality in many situations, standing alongside the much more complicated Beattie–Bridgeman model and Benedict–Webb–Rubin equation that were used at the time. Although it was initially developed for gases, the Redlich–Kwong equation has been considered the most modified equation of state since those modifications have been aimed to generalize the predictive results obtained from it. Although this equation is not currently employed in practical applications, modifications derived from this mathematical model like the Soave Redlich-Kwong (SWK), and Peng Robinson have been improved and currently used in simulation and research of vapor–liquid equilibria. Equation The Redlich–Kwong equation is formulated as: where: p is the gas pressure R is the gas constant, T is temperature, Vm is the molar volume (V/n), a is a constant that corrects for attractive potential of molecules, and b is a constant that corrects for volume. The constants are different depending on which gas is being analyzed. The constants can be calculated from the critical point data of the gas: where: Tc is the temperature at the critical point, and Pc is the pressure at the critical point. The Redlich–Kwong equation can also be represented as an equation for the compressibility factor of gas, as a function of temperature and pressure: where: Or more simply: This equation only implicitly gives Z as a function of pressure and temperature, but is easily solved numerically, originally by graphical interpolation, and now more easily by computer. Moreover, analytic solutions to cubic functions have been known for centuries and are even faster for computers. The Redlich-Kwong equation of state may also be expressed as a cubic function of the molar volume. For all Redlich–Kwong gases: where: Zc is the compressibility factor at the critical point Using the equation of state can be written in the reduced form: And since it follows: with From the Redlich–Kwong equation, the fugacity coefficient of a gas can be estimated: Critical constants It is possible to express the critical constants Tc and Pc as functions of a and b by reversing the following system of 2 equations a(Tc, Pc) and b(Tc, Pc) with 2 variables Tc, Pc: Because of the definition of compressibility factor at critical condition, it is possible to reverse it to find the critical molar volume Vm,c, by knowing previous found Pc, Tc and Zc=1/3. Multiple components The Redlich–Kwong equation was developed with an intent to also be applicable to mixtures of gases. In a mixture, the b term, representing the volume of the molecules, is an average of the b values of the components, weighted by the mole fractions: or where: xi is the mole fraction of the ith component of the mixture, bij is the covolume parameter of the i-j pair in the mixture, and Bi is the B value of the ith component of the mixture The cross-terms of bij (i.e. terms for which ), are commonly computed as , where is an often empirically fitted interaction parameter accounting for asymmetry in the cross interactions. 
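Before continuing with the mixing rule for the attractive constant a below, here is a sketch of the single-component equation in use. The critical-point expressions a = 0.42748 R²Tc^2.5/Pc and b = 0.08664 RTc/Pc are the standard Redlich–Kwong values; the fluid (CO2) and the state point are assumed for illustration, and the vapour-phase molar volume is found by a simple numerical root search rather than the analytic cubic solution.

```python
import math
from scipy.optimize import brentq

R = 8.314  # gas constant, J/(mol K)

def rk_pressure(T, Vm, a, b):
    """Redlich-Kwong pressure as a function of temperature and molar volume."""
    return R * T / (Vm - b) - a / (math.sqrt(T) * Vm * (Vm + b))

# Critical constants for CO2 (approximate literature values, used as an example)
Tc, Pc = 304.1, 7.38e6  # K, Pa

# Standard Redlich-Kwong parameter expressions from the critical point
a = 0.42748 * R**2 * Tc**2.5 / Pc
b = 0.08664 * R * Tc / Pc

# Example state point (assumed)
T, P = 350.0, 2.0e6  # K, Pa

Vm_ideal = R * T / P
# Solve P(T, Vm) = P numerically for the vapour-phase molar volume
Vm = brentq(lambda v: rk_pressure(T, v, a, b) - P, 1.001 * b, 10 * Vm_ideal)
Z = P * Vm / (R * T)

print(f"a = {a:.3f} Pa m^6 K^0.5 mol^-2, b = {b:.3e} m^3/mol")
print(f"Vm = {Vm:.4e} m^3/mol, Z = {Z:.3f}")  # Z < 1: mildly non-ideal gas
```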
The constant representing the attractive forces, a, is not linear with respect to mole fraction, but rather depends on the square of the mole fractions. That is: where: ai j is the attractive term between a molecule of species i and species j, xi is the mole fraction of the ith component of the mixture, and xj is the mole fraction of the jth component of the mixture. It is generally assumed that the attractive cross terms represent the geometric average of the individual a terms, adjusted using an interaction parameter , that is: , Where the interaction parameter is an often empirically fitted parameter accounting for asymmetry in the molecular cross-interactions. In this case, the following equation for the attractive term is furnished: where Ai is the A term for the i'''th component of the mixture. These manners of creating a and b parameters for a mixture from the parameters of the pure fluids are commonly known as the van der Waals one-fluid mixing and combining rules. History The Van der Waals equation, formulated in 1873 by Johannes Diderik van der Waals, is generally regarded as the first somewhat realistic equation of state (beyond the ideal gas law): However, its modeling of real behavior is not sufficient for many applications, and by 1949, had fallen out of favor, with the Beattie–Bridgeman and Benedict–Webb–Rubin equations of state being used preferentially, both of which contain more parameters than the Van der Waals equation. The Redlich–Kwong equation was developed by Redlich and Kwong while they were both working for the Shell Development Company at Emeryville, California. Kwong had begun working at Shell in 1944, where he met Otto Redlich when he joined the group in 1945. The equation arose out of their work at Shell - they wanted an easy, algebraic way to relate the pressures, volumes, and temperatures of the gasses they were working with - mostly non-polar and slightly polar hydrocarbons (the Redlich–Kwong equation is less accurate for hydrogen-bonding gases). It was presented jointly in Portland, Oregon at the Symposium on Thermodynamics and Molecular Structure of Solutions in 1948, as part of the 14th Meeting of the American Chemical Society. The success of the Redlich–Kwong equation in modeling many real gases accurately demonstrate that a cubic, two-parameter equation of state can give adequate results, if it is properly constructed. After they demonstrated the viability of such equations, many others created equations of similar form to try to improve on the results of Redlich and Kwong. Derivation The equation is essentially empirical – the derivation is neither direct nor rigorous. The Redlich–Kwong equation is very similar to the Van der Waals equation, with only a slight modification being made to the attractive term, giving that term a temperature dependence. At high pressures, the volume of all gases approaches some finite volume, largely independent of temperature, that is related to the size of the gas molecules. This volume is reflected in the b in the equation. It is empirically true that this volume is about 0.26Vc (where Vc is the volume at the critical point). This approximation is quite good for many small, non-polar compounds – the value ranges between about 0.24Vc and 0.28Vc. In order for the equation to provide a good approximation of volume at high pressures, it had to be constructed such that The first term in the equation represents this high-pressure behavior. The second term corrects for the attractive force of the molecules to each other. 
The functional form of a with respect to the critical temperature and pressure is empirically chosen to give the best fit at moderate pressures for most relatively non-polar gasses. In reality The values of a and b are completely determined by the equation's shape and cannot be empirically chosen. Requiring it to hold at its critical point , enforcing the thermodynamic criteria for a critical point, and without loss of generality defining and yields 3 constraints, . Simultaneously solving these while requiring b' and Zc to be positive yields only one solution: . Modification The Redlich–Kwong equation was designed largely to predict the properties of small, non-polar molecules in the vapor phase, which it generally does well. However, it has been subject to various attempts to refine and improve it. In 1975, Redlich himself published an equation of state adding a third parameter, in order to better model the behavior of both long-chained molecules, as well as more polar molecules. His 1975 equation was not so much a modification to the original equation as a re-inventing of a new equation of state, and was also formulated so as to take advantage of computer calculation, which was not available at the time the original equation was published. Many others have offered competing equations of state, either modifications to the original equation, or equations quite different in form. It was recognized by the mid 1960s that to significantly improve the equation, the parameters, especially a, would need to become temperature dependent. As early as 1966, Barner noted that the Redlich–Kwong equation worked best for molecules with an acentric factor (ω) close to zero. He therefore proposed a modification to the attractive term: where α is the attractive term in the original Redlich–Kwong equation γ is a parameter related to ω, with γ = 0 for ω = 0 It soon became desirable to obtain an equation that would also model well the Vapor–liquid equilibrium (VLE) properties of fluids, in addition to the vapor-phase properties. Perhaps the best known application of the Redlich–Kwong equation was in calculating gas fugacities of hydrocarbon mixtures, which it does well, that was then used in the VLE model developed by Chao and Seader in 1961. However, in order for the Redlich–Kwong equation to stand on its own in modeling vapor–liquid equilibria, more substantial modifications needed to be made. The most successful of these modifications is the Soave modification to the equation, proposed in 1972. Soave's modification involved replacing the T1/2 power found in the denominator attractive term of the original equation with a more complicated temperature-dependent expression. He presented the equation as follows: whereTr is the reduced temperature of the compound, andω is the acentric factor The Peng–Robinson equation of state further modified the Redlich–Kwong equation by modifying the attractive term, giving the parameters a, b, and α are slightly modified, with The Peng–Robinson equation typically gives similar VLE equilibria properties as the Soave modification, but often gives better estimations of the liquid phase density. Several modifications have been made that attempt to more accurately represent the first term, related to the molecular size. The first significant modification of the repulsive term beyond the Van der Waals equation's (where Phs represents a hard spheres equation of state term.) 
was developed in 1963 by Thiele: Phs Vm/(RT) = (1 + η + η^2)/(1 − η)^3, where η is the packing fraction of the hard spheres and Vm is the molar volume. This expression was improved by Carnahan and Starling to give Phs Vm/(RT) = (1 + η + η^2 − η^3)/(1 − η)^3. The Carnahan–Starling hard-sphere equation of state has been used extensively in developing other equations of state, and tends to give very good approximations for the repulsive term. Beyond improved two-parameter equations of state, a number of three-parameter equations have been developed, often with the third parameter depending on either Zc, the compressibility factor at the critical point, or ω, the acentric factor. Schmidt and Wenzel proposed an equation of state with an attractive term that incorporates the acentric factor: aα/(Vm^2 + (1 + 3ω)bVm − 3ωb^2). This equation reduces to the original Redlich–Kwong equation in the case when ω = 0, and to the Peng–Robinson equation when ω = 1/3. See also Gas laws Ideal gas Inversion temperature Iteration Maxwell construction Real gas Theorem of corresponding states Van der Waals equation References Equations of state Gas laws Engineering thermodynamics
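To make the modified attractive terms and the hard-sphere repulsive term discussed above concrete, here is a small Python sketch; it is illustrative only, and the 350 K / 304.1 K / ω = 0.225 inputs are rough carbon-dioxide-like example values rather than data from the article.

import math

def soave_alpha(T, Tc, omega):
    # Soave (1972): alpha = [1 + m (1 - sqrt(T/Tc))]^2
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    return (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2

def peng_robinson_alpha(T, Tc, omega):
    # Peng-Robinson uses the same form with different kappa coefficients.
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    return (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2

def carnahan_starling_Z(eta):
    # Carnahan-Starling hard-sphere compressibility factor, eta = packing fraction.
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

print(soave_alpha(350.0, 304.1, 0.225))
print(peng_robinson_alpha(350.0, 304.1, 0.225))
print(carnahan_starling_Z(0.3))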
Redlich–Kwong equation of state
[ "Physics", "Chemistry", "Engineering" ]
2,378
[ "Equations of physics", "Engineering thermodynamics", "Statistical mechanics", "Thermodynamics", "Gas laws", "Mechanical engineering", "Equations of state" ]
19,852,914
https://en.wikipedia.org/wiki/Manipulability%20ellipsoid
In robot kinematics, the manipulability ellipsoid represents the manipulability of a robotic system in a graphical form. Here, the manipulability of a robot arm refers to its ability to alter the position of the end effector based on the joint configuration. A higher manipulability measure signifies a broader range of potential movements in that specific configuration. When the robot is in a singular configuration, the manipulability measure diminishes to zero. Definition The manipulability ellipsoid is defined as the set of end-effector velocities { J(q)·dq/dt : ||dq/dt|| ≤ 1 }, where q is the joint configuration of the robot, dq/dt is the vector of joint rates, and J is the robot Jacobian relating the end-effector velocity with the joint rates. Geometric Interpretation A geometric interpretation of the manipulability ellipsoid is that it includes all possible end-effector velocities normalized for a unit input at a given robot configuration. The axes of the ellipsoid can be computed by using the singular value decomposition of the robot Jacobian. References External links Interactive demonstration of manipulability ellipsoid of a robot arm Robot control Geometry
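A minimal numerical sketch of that computation follows; it is illustrative only, and the planar two-link arm, its link lengths, and the joint angles are assumptions introduced for the example rather than anything from the article.

import numpy as np

def jacobian_2link(q1, q2, l1=1.0, l2=0.7):
    # Geometric Jacobian of a planar 2R arm: maps joint rates to (x, y) end-effector velocity.
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

J = jacobian_2link(0.4, 1.1)
U, S, Vt = np.linalg.svd(J)
print("ellipsoid axis directions (columns of U):")
print(U)
print("semi-axis lengths (singular values):", S)
print("manipulability measure (product of singular values):", np.prod(S))

# Near a singular configuration (arm almost fully stretched) the smallest
# singular value, and hence the measure, collapses toward zero:
print(np.prod(np.linalg.svd(jacobian_2link(0.4, 0.01), compute_uv=False)))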
Manipulability ellipsoid
[ "Mathematics", "Engineering" ]
230
[ "Robot control", "Geometry", "Robotics engineering" ]
8,364,143
https://en.wikipedia.org/wiki/Venturi%20scrubber
A venturi scrubber is designed to effectively use the energy from a high-velocity inlet gas stream to atomize the liquid being used to scrub the gas stream. This type of technology is a part of the group of air pollution controls collectively referred to as wet scrubbers. Venturis can be used to collect both particulate and gaseous pollutants; although the liquid surface area provided is quite large, they are more effective at removing particles, since particles can be trapped by contact while gases must be trapped by absorption during the relatively short exposure time. Venturi devices have also been used for over 100 years to measure fluid flow (Venturi tubes derived their name from Giovanni Battista Venturi, an Italian physicist). In the late 1940s, H.F. Johnstone, William Jones, and other researchers found that they could effectively use the venturi configuration to remove particles from gas streams. Figure 1 illustrates the classic venturi configuration. An ejector or jet venturi scrubber is an industrial pollution control device, usually installed on the exhaust flue gas stacks of large furnaces, but may also be used on any number of other air exhaust systems. They differ from other venturi scrubbers in that the energy is derived from the high-pressure spray of liquid from a nozzle rather than the flow of process gas, allowing the scrubber to also act as a vacuum ejector and draw process gas through the device without external assistance. Operation A venturi scrubber consists of three sections: a converging section, a throat section, and a diverging section. The inlet gas stream enters the converging section and, as the area decreases, gas velocity increases. Liquid is introduced either at the throat or at the entrance to the converging section. The inlet gas, forced to move at extremely high velocities in the small throat section, turbulently mixes with the liquid, producing an enormous number of very tiny droplets. Particle and gas removal occur in the diverging section as the inlet gas stream mixes with the fog of tiny liquid droplets. The inlet stream then exits through the diverging section, where it is forced to slow down. If liquid is introduced above the converging section and coats the walls up to the throat, then the venturi is described as having a "wet wall" or "wetted throat" as seen in Figure 2. This method allows particulates in the stream that may be prone to caking onto surfaces to be washed away and reduces the mechanical abrasion of particles hitting the throat at high speed. It is very effective for handling hot, dry inlet gases that contain dust, or particles which are abrasive or corrosive, such as kiln or furnace gases. Wetting of the throat can be achieved with a spray directed at the walls or with a weir encircling the converging section over which water flows. This method can be used only at the liquid injection source, as the high-velocity gas will shear droplets from the walls. Liquid can also be introduced by spray nozzles directly into the gas stream; for low gas flow velocities this may provide more efficient operation. Either or both methods may be employed, depending on the application. Simple venturis have fixed throat areas and so will only operate efficiently over a certain range of flow rates. Adjustable-throat venturis allow efficiency to be maintained over a much larger range of flows by changing the size of the throat in accordance with the gas flow rate.
Certain types of orifices (throat areas) that create more turbulence than a true venturi were found to be equally efficient for a given unit of energy consumed and the results of these findings led to the development of the annular-orifice, or adjustable-throat, venturi scrubber (Figure 5). The size of the throat area is varied by moving a plunger, or adjustable disk, up or down in the throat, thereby decreasing or increasing the annular opening. Gas flows through the annular opening and atomizes liquid that is sprayed onto the plunger or swirled in from the top. Wetted-throat venturis with round throats (Figures 2 and 3) can handle inlet flows as large as 88,000 m3/h (40,000 cfm). At inlet flow rates greater than this, achieving uniform liquid distribution is difficult, unless additional weirs or baffles are used. To handle large inlet flows, scrubbers designed with long, narrow, rectangular throats (Figure 4) have been used. The rectangular-throat venturi is often built to be adjustable by introducing moving plates or flaps into the throat, as shown in Figure 6. A water-wash spray is used to continually wash collected material from the plate. Another modification can be seen in the venturi-rod or rod deck scrubber. By placing a number of pipes parallel to each other, a series of longitudinal venturi openings can be created as shown in Figure 7. The area between adjacent rods is a small venturi throat. Water sprays help prevent solids buildup. The principal atomization of the liquid occurs at the rods, where the high-velocity gas moving through spacings creates the small droplets necessary for fine particle collection. This method can produce very high water droplet densities in the gas stream due to a very high throat perimeter compared to other types. These rods must be made of abrasion-resistant material due to the high velocities present. All venturi scrubbers require an entrainment separator because the high velocity of gas through the scrubber will have a tendency to entrain the droplets with the outlet clean gas stream. Cyclonic, mesh-pad, and blade separators are all used to remove liquid droplets from the flue gas and return the liquid to the scrubber water. Ejector venturi scrubber Like a spray tower an ejector venturi scrubber uses a preformed spray. However, in an ejector venturi scrubber only a single nozzle is used instead of many nozzles. This nozzle operates at higher pressures and higher injection rates than those in most spray chambers. The high-pressure spray nozzle (up to 689 kPa or 100 psig) is aimed at the throat section of a venturi constriction. The ejector venturi is unique among available scrubbing systems since it can move the process gas without the aid of a fan or blower. The liquid spray coming from the nozzle creates a partial vacuum in the side duct of the scrubber. The partial vacuum is due to the Bernoulli effect, and is similar to water aspirators used in chemistry labs. This partial vacuum can be used to move the process gas through the venturi as well as through the facility's process system. In the case of easily-clogging material, explosive gasses, or extremely corrosive atmospheres, the elimination of a fan in the system can avoid many potential problems. The energy for the formation of scrubbing droplets comes from the injected liquid. The high pressure sprays passing through the venturi throat form numerous fine liquid droplets that provide turbulent mixing between the gas and liquid phases. 
Very high liquid-injection rates are used to provide the gas-moving capability and higher collection efficiencies. As with other types of venturis, a means of separating entrained liquid from the gas stream must be installed. Entrainment separators are commonly used to remove remaining small droplets. Particle collection The atomized liquid provides an enormous number of tiny droplets for the dust particles to impact on. These liquid droplets incorporating the particles must be removed from the scrubber outlet stream, generally by cyclonic separators. Particle removal efficiency increases with increasing pressure drop because of increased turbulence due to high gas velocity in the throat. Venturis can be operated with pressure drops ranging from 12millibar to 250millibar. Most venturis normally operate with pressure drops in the range of 50 to 150 cm (20 to 60 in) of water. At these pressure drops, the gas velocity in the throat section is usually between 30 and 120 m/s (100 to 400 ft/s), or approximately 270 mph at the high end. These high pressure drops result in high operating costs. The liquid-injection rate, or liquid-to-gas ratio (L/G), also affects particle collection. The proper amount of liquid must be injected to provide adequate liquid coverage over the throat area and make up for any evaporation losses. If there is insufficient liquid, then there will not be enough liquid targets to provide the required capture efficiency. Most venturi systems operate with an L/G ratio of 0.4 to 1.3 L/m3 (3 to 10 gal/1000 ft3). L/G ratios less than 0.4 L/m3 (3 gal/1000 ft3) are usually not sufficient to cover the throat, and adding more than 1.3 L/m3 (10 gal/1000 ft3) does not usually significantly improve particle collection efficiency. Ejector venturi scrubber Ejector venturis are effective in removing particles larger than 1.0 μm in diameter. These scrubbers are not used on submicrometer-sized particles unless the particles are condensable. Particle collection occurs primarily by impaction as the exhaust gas (from the process) passes through the spray. The turbulence that occurs in the throat area also causes the particles to contact the wet droplets and be collected. Particle collection efficiency increases with an increase in nozzle pressure and/or an increase in the liquid-to-gas ratio. Increases in either of these two operating parameters will also result in an increase in pressure drop for a given system. Therefore, an increase in pressure drop also increases particle collection efficiency. Ejector venturis operate at higher L/G ratios than most other particulate scrubbers (i.e., 7 to 13 L/m3 compared to 0.4-2.7 L/m3 for most other designs), and often also require higher liquid pressures, especially if being used to drive the process gas. Gas collection Venturi scrubbers can be used for removing gaseous pollutants; however, they are not used when removal of gaseous pollutants is the only concern. The high inlet gas velocities in a venturi scrubber result in a very short contact time between the liquid and gas phases. This short contact time limits gas absorption. 
However, because venturis have a relatively open design compared to other scrubbers, they are still very useful for simultaneous gaseous and particulate pollutant removal, especially when: Scaling could be a problem A high concentration of dust is in the inlet stream The dust is sticky or has a tendency to plug openings The gaseous contaminant is very soluble or chemically reactive with the liquid To maximize the absorption of gases, venturis are designed to operate at a different set of conditions from those used to collect particles. The gas velocities are lower and the liquid-to-gas ratios are higher for absorption. For a given venturi design, if the gas velocity is decreased, then the pressure drop (resistance to flow) will also decrease and vice versa. Therefore, by reducing pressure drop, the gas velocity is decreased and the corresponding residence time is increased. Liquid-to-gas ratios for these gas absorption applications are approximately 2.7 to 5.3 L/m3 (20 to 40 gal/1000 ft3). The reduction in gas velocity allows for a longer contact time between phases and better absorption. Increasing the liquid-to-gas ratio will increase the potential solubility of the pollutant in the liquid. This is why the ejector venturi scrubber is often used instead for this purpose, although other factors may still result in a typical venturi scrubber being chosen. Though capable of some incidental control of volatile organic compounds (VOC), generally venturi scrubbers are limited to control PM (particulate matter) and high solubility or reactive gases (EPA, 1992; EPA, 1996). Ejector venturi scrubber Ejector venturis have a short gas-liquid contact time because the exhaust gas velocities through the vessel are very high. This short contact time limits the absorption efficiency of the system. Although ejector venturis are not used primarily for gas removal, they can be effective if the gas is very soluble or if a very reactive scrubbing reagent is used. In these instances, removal efficiencies of as high as 95% can be achieved. Maintenance considerations The primary maintenance problem for venturi scrubbers is wear, or abrasion, of the scrubber shell because of high gas velocities. Gas velocities in the throat can reach speeds of 430 km/h (270 mph). Particles and liquid droplets traveling at these speeds can rapidly erode the scrubber shell. Abrasion can be reduced by lining the throat with silicon carbide brick or fitting it with a replaceable liner. Abrasion can also occur downstream of the throat section. To reduce abrasion here, the elbow at the bottom of the scrubber (leading into the separator) can be flooded (i.e. filled with a pool of scrubbing liquid). Particles and droplets impact on the pool of liquid, reducing wear on the scrubber shell. A common technique to help reduce abrasion is to use a precleaner (i.e., quench sprays or cyclone) to remove the larger and more damaging particles. This also has the added benefit reducing the particle load carried by the liquid. The method of liquid injection at the venturi throat can also cause problems. Spray nozzles are used for liquid distribution because they are more efficient (have a more effective spray pattern) for liquid injection than weirs. However, spray nozzles can easily plug when liquid is recirculated. Automatic or manual reamers can be used to correct this problem. However, when heavy liquid slurries (either viscous or particle-loaded) are recirculated, open-weir injection is often necessary. 
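As a small worked example of the operating ranges quoted above, the following Python check may be useful; the gas flow, liquid rate, and throat area are invented values for illustration, not taken from any referenced design.

def liquid_to_gas_ratio(liquid_l_per_min, gas_m3_per_h):
    # L/G in litres of liquid per cubic metre of gas.
    return liquid_l_per_min / (gas_m3_per_h / 60.0)

def throat_velocity(gas_m3_per_h, throat_area_m2):
    # Superficial gas velocity in the throat, m/s.
    return gas_m3_per_h / 3600.0 / throat_area_m2

gas = 40000.0  # m3/h of inlet gas (example value)
print(liquid_to_gas_ratio(500.0, gas))   # ~0.75 L/m3: inside the 0.4-1.3 L/m3 particle-collection range
print(liquid_to_gas_ratio(2500.0, gas))  # ~3.75 L/m3: inside the 2.7-5.3 L/m3 range quoted for gas absorption
print(throat_velocity(gas, 0.12))        # ~93 m/s: inside the 30-120 m/s throat-velocity range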
Ejector venturi scrubber Ejector venturis are subject to abrasion problems in the high-velocity areas - the nozzle and the throat. Both must be constructed of wear-resistant materials because of the high liquid injection rates and nozzle pressures. Maintaining the pump that recirculates liquid is also very important. In addition, the high gas velocities necessitate the use of entrainment separators to prevent excessive liquid carryover. The separators should be easily accessible or removable so that they can be cleaned if plugging occurs. Summary Venturi scrubbers can have the highest particle collection efficiencies (especially for very small particles) of any wet scrubbing system. They are the most widely used scrubbers because their open construction enables them to remove most particles without plugging or scaling. Venturis can also be used to absorb pollutant gases; however, they are not as efficient for this as are packed or plate towers. Venturi scrubbers have been designed to collect particles at very high collection efficiencies, sometimes exceeding 99%. The ability of venturis to handle large inlet volumes at high temperatures makes them very attractive to many industries; consequently, they are used to reduce particulate emissions in a number of industrial applications. This ability is particularly desirable for cement kiln emission reduction and for control of emissions from basic oxygen furnaces in the steel industry, where the inlet gas enters the scrubber at temperatures greater than 350 °C (660 °F). Venturis are also used to control fly ash and sulfur dioxide emissions from industrial and utility boilers. The operating characteristics of venturi scrubbers are listed in Table 1. Ejector venturi scrubber Because of their open design and the fact that they do not require a fan, ejector venturis are capable of handling a wide range of corrosive and/or sticky particles. However, they are not very effective in removing submicrometer particles. They have an advantage in being able to handle small, medium and large exhaust flows. They can be used singly or in multiple stages of two or more in series, depending on the specific application. Multiple-stage systems have been used where extremely high collection efficiency of particles or gaseous pollutants was necessary. Multiple-stage systems provide increased gas-liquid contact time, thus increasing absorption efficiency. See also Ejector venturi scrubber Venturi effect References Bibliography Anderson 2000 Company. Venturi scrubbing equipment. Engineering Manual with Operating and Maintenance Instructions. Atlanta: Anderson Company. Bethea, R. M. 1978. Air Pollution Control Technology. New York: Van Nostrand Reinhold. Buonicore, A. J. 1982. Wet scrubbers. In L. Theodore and A. J. Buonicore (Eds.), Air Pollution Control Equipment, Design, Selection, Operation and Maintenance. Englewood Cliffs: Prentice-Hall. Calvert, S. 1977. How to choose a particulate scrubber. Chemical Engineering. 84:133-140. Johnstone, H. F., and M. H. Roberts. 1949. Deposition of aerosol particles from moving gas streams. Industrial and Engineering Chemistry. 41:2417-2423. Kelly, J. W. 1978, December 4. Maintaining venturi-tray scrubbers. Chemical Engineering. McIlvaine Company. 1974. The Wet Scrubber Handbook. Northbrook, IL: McIlvaine Company. Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency. Richards, J. R. 1995. Control of Gaseous Emissions. (APTI Course 415). U.S. Environmental Protection Agency.
External links Air Pollution Training Courses (from website of U.S. EPA's Air Pollution Training Institute) Pollution control technologies Air pollution control systems Wet scrubbers Gas-phase contacting scrubbers
Venturi scrubber
[ "Chemistry", "Engineering" ]
3,719
[ "Scrubbers", "Wet scrubbers", "Pollution control technologies", "Environmental engineering" ]
8,366,618
https://en.wikipedia.org/wiki/Radu%20B%C4%83lescu
Radu Bălescu (Bucharest, 18 July 1932 – 1 June 2006, Bucharest) was a Romanian and Belgian (since 1959) scientist and professor at the Statistical and Plasma Physics group of the Université Libre de Bruxelles (ULB). He studied at the Titu Maiorescu High School in Bucharest (1943–1948) and the Athénée Royal d'Ixelles (1948–1950). At the ULB (1950–1958) he studied chemistry and obtained a PhD in 1958. He started his academic career in 1957 at the ULB as an assistant (with Prof. Ilya Prigogine) at the Service de Physique Théorique et Mathématique. He became a professor at the ULB in 1964. He worked on the statistical physics of charged particles (Bălescu-Lenard collision operator) and on the theory of transport of magnetically confined plasmas. Radu Balescu was involved in the European fusion programme for more than 30 years as a scientist and as the head of research unit of the ULB group in the Euratom-Belgian state Association. In 1970 he was awarded the Francqui Prize on Exact Sciences. In 2000 he received the Hannes Alfvén Prize. See also EURATOM References Journal External links Radu Bălescu Belgian physicists Scientists from Bucharest Romanian emigrants to Belgium 1932 births 2006 deaths Free University of Brussels (1834–1969) alumni Plasma physicists Ion Luca Caragiale National College (Bucharest) alumni
Radu Bălescu
[ "Physics" ]
304
[ "Plasma physicists", "Plasma physics" ]
6,385,832
https://en.wikipedia.org/wiki/Local%20convex%20hull
Local convex hull (LoCoH) is a method for estimating size of the home range of an animal or a group of animals (e.g. a pack of wolves, a pride of lions, or herd of buffaloes), and for constructing a utilization distribution. The latter is a probability distribution that represents the probabilities of finding an animal within a given area of its home range at any point in time; or, more generally, at points in time for which the utilization distribution has been constructed. In particular, different utilization distributions can be constructed from data pertaining to particular periods of a diurnal or seasonal cycle. Utilization distributions are constructed from data providing the location of an individual or several individuals in space at different points in time by associating a local distribution function with each point and then summing and normalizing these local distribution functions to obtain a distribution function that pertains to the data as a whole. If the local distribution function is a parametric distribution, such as a symmetric bivariate normal distribution then the method is referred to as a kernel method, but more correctly should be designated as a parametric kernel method. On the other hand, if the local kernel element associated with each point is a local convex polygon constructed from the point and its k-1 nearest neighbors, then the method is nonparametric and referred to as a k-LoCoH or fixed point LoCoH method. This is in contrast to r-LoCoH (fixed radius) and a-LoCoH (adaptive radius) methods. In the case of LoCoH utilization distribution constructions, the home range can be taken as the outer boundary of the distribution (i.e. the 100th percentile). In the case of utilization distributions constructed from unbounded kernel elements, such as bivariate normal distributions, the utilization distribution is itself unbounded. In this case the most often used convention is to regard the 95th percentile of the utilization distribution as the boundary of the home range. To construct a k-LoCoH utilization distribution: Locate the k − 1 nearest neighbors for each point in the dataset. Construct a convex hull for each set of nearest neighbors and the original data point. Merge these hulls together from smallest to largest. Divide the merged hulls into isopleths where the 10% isopleth contains 10% of the original data points, the 100% isopleth contains all the points, etc. In this sense, LoCoH methods are a generalization of the home range estimator method based on constructing the minimum convex polygon (MCP) associated with the data. The LoCoH method has a number of advantages over parametric kernel methods. In particular: As more data are added, the estimates of the home range become more accurate than for bivariate normal kernel constructions. LoCoH handles 'sharp' features such as lakes and fences much better than parametric kernel constructions. As mentioned above, the home range is a finite region without having to resort to an ad-hoc choice, such as the 95th percentile to obtain bounded region. LoCoH has a number of implementations including a now-defunct LoCoH Web Application. LoCoH was formerly known as k-NNCH, for k-nearest neighbor convex hulls. It has recently been shown that the a-LoCoH is the best of the three LoCoH methods mentioned above (see Getz et al. in the references below). T-LoCoH T-LoCoH (time local convex hull) is an enhanced version of LoCoH which incorporates time into the home range construction. 
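Before the time-scaled variant is described further, here is a minimal sketch of the plain k-LoCoH construction outlined above; it is illustrative only, relies on the scipy and shapely libraries, uses random points in place of real telemetry fixes, and stops at a single 95% isopleth rather than building the full set of isopleths.

import numpy as np
from scipy.spatial import cKDTree
from shapely.geometry import MultiPoint, Point
from shapely.ops import unary_union

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2))            # placeholder location fixes
k = 10

tree = cKDTree(pts)
local_hulls = []
for p in pts:
    _, idx = tree.query(p, k=k)            # the point plus its k-1 nearest neighbours
    local_hulls.append(MultiPoint([tuple(q) for q in pts[idx]]).convex_hull)

# Merge the local hulls from smallest to largest until 95% of fixes are covered.
local_hulls.sort(key=lambda h: h.area)
isopleth, target = None, int(0.95 * len(pts))
for h in local_hulls:
    isopleth = h if isopleth is None else unary_union([isopleth, h])
    if sum(isopleth.covers(Point(x, y)) for x, y in pts) >= target:
        break

print("area of the 95% isopleth:", isopleth.area)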
Time is incorporated into the algorithm via an alternative measure of 'distance', called time scaled distance (TSD), which combines the spatial distance and temporal distance between any two points. This presumes that each point has a time stamp associated with it, as with GPS data. T-LoCoH uses TSD rather than Euclidean distance to identify each point's nearest neighbors, resulting in hulls that are localized in both space and time. Hulls are then sorted and progressively unioned into isopleths. Like LoCoH, UDs created by T-LoCoH generally do a good job modeling sharp edges in habitat such as water bodies; in addition T-LoCoH isopleths can delineate temporal partitions of space use. T-LoCoH also offers additional sorting options for hulls, allowing it to generate isopleths that differentiate internal space by both intensity of use (the conventional UD) and a variety of behavioral proxies, including directionality and time use metrics. Time scaled distance The TSD for any two locations i and j separated in time by Δt is given by TSDij = sqrt(Δx² + Δy² + (s·vmax·Δt)²), where Δx and Δy are the spatial separations of the two locations and vmax is the maximum observed speed. Conceptually, TSD transforms the period of time between two observations into spatial units by estimating how far the individual could have traveled during the time period if it had been moving at its maximum observed speed. This theoretical movement distance is then mapped onto a third axis of space, and the distance is calculated using standard Euclidean distance equations. The TSD equation also features a scaling parameter s which controls the degree to which the temporal difference scales to spatial units. When s=0, the temporal distance drops out and TSD is equivalent to Euclidean distance (thus T-LoCoH is backward compatible with LoCoH). As s increases, the temporal distance becomes more and more influential, eventually swamping out the distance in space. The TSD metric is not based on a mechanistic or diffusion model of movement, but merely serves to generate hulls that are local in space and/or time. References Spatial analysis Convex hulls
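A minimal sketch of that distance follows, assuming the straightforward reading of the description above (elapsed time converted to an equivalent distance via s and the maximum observed speed, then treated as a third Euclidean axis); the exact functional form used by the published T-LoCoH software may differ in detail.

import math

def time_scaled_distance(x1, y1, t1, x2, y2, t2, s, v_max):
    dx, dy, dt = x1 - x2, y1 - y2, abs(t1 - t2)
    return math.sqrt(dx**2 + dy**2 + (s * v_max * dt)**2)

# With s = 0 the time term vanishes and TSD reduces to the ordinary Euclidean distance.
print(time_scaled_distance(0, 0, 0, 30, 40, 600, s=0.0, v_max=0.5))    # 50.0
print(time_scaled_distance(0, 0, 0, 30, 40, 600, s=0.05, v_max=0.5))   # ~52.2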
Local convex hull
[ "Physics" ]
1,134
[ "Spacetime", "Space", "Spatial analysis" ]
6,386,814
https://en.wikipedia.org/wiki/Outline%20of%20architecture
The following outline is an overview and topical guide to architecture: Architecture – the process and the product of designing and constructing buildings. Architectural works with a certain indefinable combination of design quality and external circumstances may become cultural symbols and / or be considered works of art. What type of thing is architecture? Architecture can be described as all of the following: Academic discipline – focused study in one academic field or profession. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, and research areas that are strongly associated with the given discipline. Buildings – buildings and similar structures, the product of architecture, are referred to as architecture. One of the arts – as an art form, architecture is an outlet of human expression, that is usually influenced by culture and which in turn helps to change culture. Architecture is a physical manifestation of the internal human creative impulse. Fine art – in Western European academic traditions, fine art is art developed primarily for aesthetics, distinguishing it from applied art that also has to serve some practical function. The word "fine" here does not so much denote the quality of the artwork in question, but the purity of the discipline according to traditional Western European canons. Science – systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. A science is a branch of science, or a discipline of science. It is a way of pursuing knowledge, not only the knowledge itself. Applied science – branch of science that applies existing scientific knowledge to develop more practical applications, such as technology or inventions. Definitions of architecture Architecture is variously defined in conflicting ways, highlighting the difficulty of describing the scope of the subject precisely: A general term to describe buildings and other physical structures – although not all buildings are generally considered to be architecture, and infrastructure (bridges, roads etc.) is civil engineering, not architecture. The art and science, or the action and process, of designing and constructing buildings. The design activity of the architect, the profession of designing buildings. A building designed by an architect, the end product of architectural design. A building whose design transcends mere function, a unifying or coherent form or structure. The expression of thought in building. A group or body of buildings in a particular style. A particular style or way of designing buildings. Some key quotations on the subject of architecture: Vitruvius: defined the essential qualities of architecture as "firmness, commodity, and delight". Johann Wolfgang von Goethe: "I call architecture frozen music". Walter Gropius: "Architecture begins where engineering ends". Le Corbusier: "A house is a machine for living in". Louis Sullivan: "... form ever follows function. This is the law", usually quoted as the architectural mantra "form follows function". Mies van der Rohe: "Less is more". Robert Venturi: "Less is a bore". Roles in architecture Professionals involved in planning, designing, and constructing buildings include: Architect – a person trained in the planning, design and supervision of building construction. Architectural intern – a person gaining practical experience while studying to qualify as an architect. Council architect – an architect employed by a local authority. 
Landscape architect – a person who develops land for human use and enjoyment through effective placement of structures, vehicular and pedestrian ways, and plantings. Project architect – a person who is responsible for overseeing the architectural aspects of the development of the design, production of the construction documents and specifications. State architect – a person who is generally responsible for the design and/or construction of public buildings in the state. Architectural designer – generally, a designer involved in architecture but not qualified as an architect. Architectural engineer Architectural technologist or building technologist – a professional trained in architectural technology, building design and construction, and who provides building design services. Building control officer or Approved Inspector Building inspector Clerk of works Drafter or draughtsman – a person trained in drawing up architectural drawings. Site manager Building surveyor People engaged in architecture Architects Architecture firms Architectural historians Architecture critics Architectural styles Architectural style – a specific way of building, characterized by the features that make it notable. A style may include such elements as form, method of construction, materials, and regional character. Influential contemporary and relatively recent styles include : Modern architecture – generally characterized by simplification of form and the absence of applied ornament. Postmodern architecture – has been described as the return of "wit, ornament and reference" to architecture in response to the formalism of the International Style of modernism. Deconstructivism – based on the more general theory of deconstruction, a design style characterized by fragmentation, distortion and dislocation of structure and envelope. International style or international modern– the pervasive and often anonymous style of city developments worldwide. Brutalism – the notorious use of raw concrete and massive uncompromising forms. Specialist subclassifications of architecture Terms used to describe different architectural concerns, origins and objectives. Architecture parlante ("speaking architecture") – buildings or architectural elements that explain their own function or identity by means of an inscription or literal representation. Religious architecture – the design and construction of places of worship. Responsive architecture – designing buildings that measure their environmental conditions (via sensors) to adapt their form, shape, color or character responsively (via actuators). Sustainable architecture – environmentally conscious design techniques in the field of architecture. Vernacular architecture – traditional local building styles, typically not designed by professional architects although vernacular elements are adopted by many architects. Architectural theory Architectural design values – the various values that influence architects and designers in making design decisions. Mathematics and architecture – have always been close, because architecture relies upon mathematical precision, and because both fields share a search for order and beauty. Pattern language – a term coined by architect Christopher Alexander, a structured method of describing good design practices within a field of expertise. Proportion – the relationship between elements and the whole. Space syntax – a set of theories and techniques for the analysis of spatial configurations. 
Architecture criticism – published or broadcast critique, assessing the architect's success in meeting his own aims and objectives and those of others. Architectural terms Glossary of architecture Regional architecture Architecture of Africa Architecture of Africa Architecture of Algeria Architecture of Angola Ancient Egyptian architecture Architecture of Cape Verde Architecture of Ethiopia Architecture of Ghana Architecture of Madagascar Architecture of Mali Architecture of Morocco Architecture of Casablanca Architecture of Fez Architecture of Marrakesh Architecture of Nigeria Architecture of Lagos Architecture of Somalia Architecture of South Africa Architecture of Johannesburg Cape Dutch architecture Architecture of Zimbabwe Architecture of Asia Architecture of Asia Architecture of Afghanistan Armenian architecture Architecture of Azerbaijan Architecture of Baku Architecture of Bahrain Architecture of Bangladesh Architecture of Bhutan Dzong architecture Architecture of Cambodia Architecture of China Architecture of Hong Kong Architecture of Cyprus Architecture of Georgia (country) Architecture of India Architecture of Bengal Architecture of Gujarat Architecture of Karnataka Architecture of Kerala Architecture of Maharashtra Architecture of Rajasthan Architecture of Tamil Nadu Architecture of Uttar Pradesh Architecture of Telangana Architecture of Indonesia Architecture of Sumatra Balinese architecture Architecture of Iran Architecture of Tehran Architecture of Israel Architecture of Japan Architecture of Tokyo Okinawan architecture Architecture of Jordan Architecture of Korea Architecture of North Korea Architecture of South Korea Architecture of Kuwait Architecture of Lebanon Architecture of Macau Architecture of Malaysia Architecture of Kuala Lumpur Architecture of Penang Architecture of Mongolia Architecture of Myanmar Architecture of Nepal Architecture of Kathmandu Architecture of Pakistan Architecture of the Philippines Architecture of Russia Architecture of Saudi Arabia Architecture of Singapore Architecture of Sri Lanka Architecture of Taiwan Architecture of Thailand Architecture of Tibet Architecture of Turkey Architecture of the United Arab Emirates Architecture of Uzbekistan Architecture of Vietnam Architecture of Yemen Architecture of Europe Architecture of Europe Architecture of Albania Architecture of Austria Architecture of Azerbaijan Architecture of Belgium Architecture of Bosnia and Herzegovina Architecture of Mostar Architecture of Bulgaria Architecture of Croatia Architecture of the Czech Republic Architecture of Denmark Architecture of Aarhus Architecture in Copenhagen Architecture of Estonia Architecture of Finland Architecture of France Architecture of Normandy Architecture of Paris Architecture of the Paris Métro Paris architecture of the Belle Époque Architecture of Provence Architecture of Georgia (country) Architecture of Germany Architecture of Berlin Architecture of Munich Architecture of Greece Modern architecture in Athens Architecture of Hungary Architecture of Iceland Architecture of Ireland Architecture of Letterkenny Architecture of Limerick Architecture of Italy Architecture of Rome Italian modern and contemporary architecture Architecture of Lithuania Architecture of Luxembourg Architecture of Malta Architecture of Moldova Architecture of Monaco Architecture of Montenegro Architecture of the Netherlands Architecture of North Macedonia Architecture of Norway Architecture of Poland Architecture of Warsaw Architecture of Portugal Architecture of Romania 
Architecture of Russia Architecture of Serbia Architecture of Belgrade Architecture of Slovenia Architecture of Spain Architecture of Barcelona Architecture of Madrid Architecture of Sweden Architecture of Stockholm Architecture of Switzerland Architecture of Turkey Architecture of Istanbul Architecture of Ukraine Architecture of the United Kingdom Architecture of England Architecture of Aylesbury Architecture of Birmingham Architecture of Leeds Architecture of Liverpool Architecture of London Architecture of the London Borough of Croydon Architecture of Manchester Buildings and architecture of Bath Architecture of Scotland Architecture of Aberdeen Architecture of Glasgow Architecture of Ireland Architecture of Wales Architecture of Cardiff Architecture of Vatican City Dependencies, autonomies, and other territories Architecture of Gibraltar Architecture of Kosovo Architecture of Peć Architecture of North America Architecture of Antigua and Barbuda Architecture of the Bahamas Architecture of Barbados Architecture of Belize Architecture of Canada Architecture of Montreal Architecture of Ottawa Architecture of Quebec City Architecture of St. John's Architecture of Toronto Architecture of Vancouver Architecture of Costa Rica Architecture of Cuba Architecture of Havana Architecture of Dominica Architecture of the Dominican Republic Architecture of El Salvador Architecture of Grenada Architecture of Guatemala Architecture of Haiti Architecture of Honduras Architecture of Jamaica Architecture of Mexico Architecture of Nicaragua Architecture of Panama Architecture of Panama City Architecture of Puerto Rico Architecture of Saint Kitts and Nevis Architecture of Saint Lucia Architecture of Saint Vincent and the Grenadines Architecture of Trinidad and Tobago Architecture of the United States Architecture of Albany, New York Buildings and architecture of Allentown, Pennsylvania Architecture of Atlanta Architecture of Buffalo, New York Architecture of Chicago Architecture of metropolitan Detroit Architecture of Fredericksburg, Texas Architecture of Houston Architecture of Jacksonville Architecture of Kansas City Architecture of Las Vegas Architecture of Los Angeles Architecture of Miami Buildings and architecture of New Orleans Architecture of New York City Architecture in Omaha, Nebraska Architecture of Philadelphia Architecture of Plymouth, Pennsylvania Architecture of Portland, Oregon Architecture of San Antonio Architecture of San Francisco Architecture of Seattle Architecture of St. 
Louis Dependencies and other territories Architecture of Anguilla Architecture of Aruba Architecture of Bermuda Architecture of the British Virgin Islands Architecture of the Cayman Islands Architecture of Greenland Architecture of Guadeloupe Architecture of Martinique Architecture of Montserrat Architecture of Navassa Island Architecture of the Netherlands Antilles Architecture of Saint Barthélemy Architecture of Saint Pierre and Miquelon Architecture of the Turks and Caicos Islands Architecture of the United States Virgin Islands Architecture of Oceania Architecture of Oceania Australia Architecture of Australia Architecture of Bathurst, New South Wales Architecture of Melbourne Architecture of Sydney Melanesia Architecture of Fiji Micronesia Polynesia Architecture of New Zealand Architecture of Samoa Architecture of South America Architecture of South America Architecture of Argentina Architecture of Bolivia Architecture of Brazil Architecture of Chile Architecture of Colombia Architecture of Panama Architecture of Paraguay Architecture of Peru Architecture of Trinidad and Tobago Territories Architecture of Aruba Architecture of the Falkland Islands History of architecture Neolithic architecture – architecture of the last part of the Stone Age, and of the people of the Americas and the Pacific up until the time of European contact. Ancient Egyptian architecture – architecture of ancient Egypt, which developed a vast array of diverse structures and great architectural monuments along the Nile, among the largest and most famous of which are the Great Pyramid of Giza and the Great Sphinx of Giza. Achaemenid architecture – the architectural achievements of the Achaemenid Persians manifesting in construction of complex cities (Perspepolis, Susa, Ecbatana), temples made for worship and social gatherings (such as Zoroastrian temples), and mausoleums erected in honor of fallen kings (such as the burial tomb of Cyrus the Great). Armenian architecture – an architectural style developed over the last 4,500 years of human habitation in the Armenian Highland (the eastern part of Asia Minor) and used principally by the Armenian people. Coptic architecture – the architecture of the Copts, who form the majority of Christians in Egypt. Dravidian architecture – a style of architecture thousands of years ago in the Southern part of the Indian subcontinent or South India, built by the Dravidian peoples. Maya architecture – the structures of the Maya civilization, which was established circa 2000 BC and continued until its conquest by the Spanish (in the 16th and 17th centuries). Some of its notable constructions include ceremonial platforms, palaces, E-Groups, pyramids, temples, observatories, and ballcourts. Sumerian architecture – the ancient architecture of the region of the Tigris–Euphrates river system (also known as Mesopotamia). Ancient Greek architecture – the architecture of ancient Greece, where the classical orders were developed, establishing a precedent for the subsequent development of classical architecture. Ancient Roman architecture – adopted the principles of ancient Greek architecture and developed both new decorative forms, and much more complex building forms, notably adopting the use of arches and vaults. Buddhist architecture – developed by the worshipers of Buddha in South Asia in the 3rd century BCE, and associated with three types of structures: monasteries (viharas), stupas, and temples (Chaitya ). 
Inca architecture – the pre-Columbian architecture of the Incas in South America, known particularly for its exceptionally precise masonry. Sassanid architecture – the Persian architectural style that reached a peak in its development during the Sassanid era. Mesoamerican architecture – the set of architectural traditions produced by pre-Columbian cultures and civilizations of Mesoamerica, best known in the form of public, ceremonial and urban monumental buildings and structures. Byzantine architecture – the architecture of the Byzantine Empire. Islamic architecture – encompasses a wide range of both secular and religious styles from the foundation of Islam to the present day, influencing the design and construction of buildings and structures in Islamic culture. Newa architecture – style of architecture used by the Newari people in the Kathmandu valley in Nepal, ranging from stupas and chaitya monastery buildings to courtyard structures and distinctive houses. Medieval architecture – a term used to represent various forms of architecture common in Medieval Europe. Romanesque architecture – an architectural style of Medieval Europe characterized by semi-circular arches. Gothic architecture – a style of architecture that flourished during the high and late medieval period. Iranian architecture – or Persian architecture is the historic architecture of Iran (Persia). Hoysala architecture – building style developed under the rule of the Hoysala Empire between the 11th and 14th centuries, in the region known today as Karnataka, a state of India. Vijayanagara Architecture – primarily a temple style found in Vijayanagara principality in India. Ottoman architecture – or Turkish architecture is the architecture of the Ottoman Empire which emerged in Bursa and Edirne in 14th and 15th centuries. Renaissance architecture – the architecture of the period between the early 15th and early 17th centuries in different regions of Europe, demonstrating a conscious revival and development of certain elements of ancient Greek and Roman thought and material culture. Classical architecture – architecture derived in part from the Greek and Roman architecture of classical antiquity, enriched by classicizing architectural practice in Europe since the Renaissance. Palladian architecture – a European architectural style derived from and inspired by the designs of the Venetian architect Andrea Palladio (1508–1580). Baroque architecture – the building style of the Baroque era, begun in late sixteenth century Italy, that took the Roman vocabulary of Renaissance architecture and used it in a new rhetorical and theatrical fashion, often to express the triumph of the Catholic Church and the absolutist state. Neoclassical architecture – an architectural style produced by the neoclassical movement which began in the mid-18th century, manifested both in its details as a reaction against the Rococo style of naturalistic ornament, and in its architectural formulas as an outgrowth of some classicizing features of Late Baroque. Victorian architecture – includes several architectural styles employed predominantly during the middle and late 19th century. Renaissance Revival architecture – nineteenth century revival style inspired by buildings of the Renaissance. Gothic Revival architecture – also called "Victorian Gothic" and "Neo-Gothic", an architectural movement that began in the late 1840s in England. 
Its popularity grew rapidly in the early 19th century, when increasingly serious and learned admirers of neo-Gothic styles sought to revive medieval forms, in contrast to the neoclassical styles prevalent at the time. Modern architecture – generally characterized by simplification of form and absence of ornament. Although now historical, the ubiquitous international style which predominates in cities worldwide remains a strong influence in contemporary architecture. Postmodern architecture – began as an international style the first examples of which are generally cited as being from the 1950s, but did not become a movement until the late 1970s and continues to influence present-day architecture. New Classical architecture – a movement for reapproaching traditional architecture language, that established since the 1980s. Contemporary architecture – the architecture of the 21st century. Buildings Although not all buildings are architecture, the term encompasses a huge range of building types, as summarised in the following list pages: List of building types List of buildings and structures List of human habitation forms Building construction Building design Materials Materiality (architecture) Building material List of building materials Structural elements Arch – a curved structure, often made up blocks or bricks, spanning across an opening and supporting the weight of structure above. Works by transferring vertical loads into compression forces. There are many arch shapes including semicircular, segmental, parabolic, pointed (gothic), three-point and flat arches. Beam (structure) – a straight structural member, typically wood or steel, capable of spanning from one support to another and supporting the weight of structure above. Works by resisting bending forces. Buttress – a short section of masonry built at right angles to a wall, to resist lateral forces. Cantilever – a projecting structure without visible means of support at the projecting end. Column or pillar – a relatively slender structural element, typically circular, square or polygonal in plan, that bears the weight of the structure above. Dome – a roof structure, typically hemispherical, constructed in a similar way to an arch. The plan shape may be circular, elliptical or polygonal, and the cross section shape can vary in the same ways as an arch. Doorway – opening in a wall, typically rectangular, providing means of access, usually with a gate or door to provide security and weather protection. Facade – an exterior face of a building, especially the front. Floor – typically a horizontal surface that is walked upon, to support the expected loads of the building. Foundation or footing – solid base usually below ground, upon which buildings and other structures are built. Works by spreading vertical loads over a sufficient area to ensure the structure will not subside. Lintel – a structural member spanning across the top of an opening. Unlike a beam, a lintel spans a relatively short distance which can be spanned by single block of stone of sufficient depth. Concrete, timber and steel lintels are also used in different types of construction. Pier (architecture) – loadbearing structure similar to a column, but more massive. Truss – a structure spanning in the same way as a beam, but using materials more efficiently by using triangulation to create a rigid structure. Typically timber or steel, used to support a pitched roof. 
Vault (architecture) – a curved masonry structure spanning in the same way as an arch, forming either a roof or support for a floor above. Wall – a linear structure enclosing the exterior of an area or building, or subdividing an internal space. A wall may be loadbearing or non-loadbearing. Window – an opening in a wall, typically rectangular, providing light and ventilation. Usually but not always glazed. Architectural education Professional requirements for architects – Students undertake specific vocational training in order to qualify as a professional architects. Training typically consists of one or more university degrees and a period of practical experience. In some countries, it is illegal to use the title architect without accredited qualifications. In the United Kingdom the Architects Registration Board exists solely to regulate membership of the profession. In the United States the National Architectural Accrediting Board regulates professional educational programs and the National Council of Architectural Registration Boards is an umbrella organization to recommend model laws, regulations, exams, and to facilitate reciprocity in licensure between different jurisdictions. Architectural education can involve degree types that do not directly result in licensing, such as Bachelor of Science in architecture, Bachelor of Arts in architecture, or a research PhD in architecture. Some of the qualifications specific to architectural licensing include: Bachelor of Architecture (B.Arch.) – undergraduate academic degree designed to satisfy the academic component of professional accreditation bodies, to be followed by a period of practical training prior to professional examination and registration. Master of Architecture (M.Arch.) – professional degree in architecture, qualifying the graduate to move through the various stages of professional accreditation (internship, exams) that result in receiving a license. Doctor of Architecture (D.Arch) – doctoral degree in the field of architecture, that can be completed after either a Bachelor of Architecture (B.Arch), Master of Architecture (M.Arch) degree or, in some cases, another degree. Architectural practice Architectural drawing or architect's drawing - a technical drawing of a building or building project. Architectural design competition - a specialist competition inviting architects to submit design proposals for a project. Architectural technology or building technology - is the application of technology to the design of buildings. It is a component of architecture and building engineering and is sometimes viewed as a distinct discipline or sub-category. Blueprint - an obsolete paper-based method of reproducing technical drawings producing a distinctive appearance, white lines on a blue background. The word is still in use as a by-word for a design solution ("a blueprint for future developments"). Brief (architecture) - a written statement of a client's requirements for a building project. Building code or building control - a set of rules that specify the minimum acceptable level of safety and environmental performance in building construction. Computer-aided architectural design (CAAD) - software based production of technical drawings and 3-d models Construction law - a branch of law that deals with matters relating to building construction. Cost accounting or cost management - a vital activity in connection with building, generally performed by a specialist quantity surveyor. 
Construction projects are notoriously subject to cost overruns, caused by changing circumstances or by failure to fully allow for foreseeable costs during budgeting. Project management - the process of managing all the activities involved in a construction project, including adherence to the design and local legislation, costs and payment, and verification of project completion. Architecture prizes Architecture prize – Architecture prizes are generally awarded for completed projects and are chosen from publicised or nominated works, not from submissions by the originating architect. The RIBA Royal Gold Medal has in fact been refused on a number of occasions. Aga Khan Award for Architecture (AKAA) – an architectural prize established by Aga Khan IV in 1977, awarded for achievements in design and planning in Islamic societies. AIA Gold Medal – awarded by the American Institute of Architects for a significant body of work 'of lasting influence on the theory and practice of architecture', first awarded 1907. European Union Prize for Contemporary Architecture, also known as the Mies van der Rohe Award – awarded jointly by the European Union and the Fundacia Mies van der Rohe, Barcelona, 'to acknowledge and reward quality architectural production in Europe'. Pritzker Architecture Prize – awarded annually to "a living architect whose built work demonstrates a combination of those qualities of talent, vision and commitment, which has produced consistent and significant contributions to humanity and the built environment through the art of architecture". Founded in 1979 by Jay A. Pritzker and his wife Cindy, the award is often referred to as the Nobel Prize of architecture. RIBA Royal Gold Medal – awarded annually since 1848 by the Royal Institute of British Architects for an individual's or group's substantial contribution to international architecture. It is given for a distinguished body of work rather than for one building. Carbuncle Cup – unlike the mainstream awards which reward perceived merit, this is awarded annually by the UK magazine Building Design to 'the ugliest building in the United Kingdom completed in the last 12 months'. more... Related fields Architectural conservation – repair and restoration of buildings, especially historic structures. Architectural technology – the application of technology to the design of buildings. Construction – the process of creating physical structures. Building construction – construction specific to buildings. Civil engineering – the design, construction and maintenance of the physical environment e.g. bridges, canals, dams, drainage systems and roads etc. Building services engineering – the design of heating, ventilation and cooling and other mechanical systems, electrical power and lighting. Structural engineering – the analysis and design of structures that support or resist loads. Sustainable design – provides expertise in improving the environmental performance of buildings. Interior design – the design of interior finishes and fittings. Urban design and urban planning (urban, city, and town planning) – a technical and legal process concerned with controlling the design of structures and the use of land. 
Virtual reality – technology used to simulate a virtual world See also Architectural glossary Index of architecture articles Table of years in architecture Timeline of architecture References External links Architecture.com, published by Royal Institute of British Architects Worldarchitecture.org, World Architecture Database Archdaily.com Recompilation of thousands of recent projects Architectural centers and museums in the world, list of links from the UIA arch-library Building materials Architecture
Outline of architecture
[ "Physics", "Engineering" ]
5,188
[ "Building engineering", "Construction", "Materials", "Building materials", "Architecture lists", "Matter", "Architecture" ]
2,652,836
https://en.wikipedia.org/wiki/UTM%20theorem
In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function. The universal function is an abstract version of the universal Turing machine, thus the name of the theorem. Rogers' equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem. Theorem The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that f(x) ≃ u(e, x) for all x. This means that, for each x, either f(x) and u(e, x) are both defined and are equal, or are both undefined. The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, ... is an enumeration of the partial computable functions. The function u in the statement of the theorem is called a universal function. References Theorems in theory of computation Computability theory
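The flavour of the theorem can be conveyed by a deliberately informal sketch in Python, in which the "index" e is simply the source text of a one-argument function named f (a hypothetical stand-in for a Gödel number) and the interpreter plays the role of the universal machine; as with the universal function, evaluation may fail to halt for some indices.

import textwrap

def u(e, x):
    # Treat the index e as the source code of a one-argument function named f,
    # then apply it to x. Non-termination of the encoded program mirrors the
    # partiality of the universal function.
    scope = {}
    exec(textwrap.dedent(e), scope)
    return scope["f"](x)

# Example: this index encodes the squaring function, so u(e, 5) evaluates to 25.
e = "def f(x):\n    return x * x"
print(u(e, 5))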
UTM theorem
[ "Mathematics" ]
254
[ "Mathematical logic", "Theorems in theory of computation", "Computability theory", "Mathematical theorems in theoretical computer science", "Mathematical logic stubs" ]
2,654,296
https://en.wikipedia.org/wiki/Uniform%20boundedness
In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family. Definition Real line and complex plane Let F = {fi : X → K, i ∈ I} be a family of functions indexed by I, where X is an arbitrary set and K is the set of real or complex numbers. We call F uniformly bounded if there exists a real number M such that |fi(x)| ≤ M for all i ∈ I and all x ∈ X. Metric space In general let Y be a metric space with metric d, then the set F = {fi : X → Y, i ∈ I} is called uniformly bounded if there exists an element a of Y and a real number M such that d(fi(x), a) ≤ M for all i ∈ I and all x ∈ X. Examples Every uniformly convergent sequence of bounded functions is uniformly bounded. The family of functions fn(x) = sin(nx), defined for real x with n traveling through the integers, is uniformly bounded by 1. The family of derivatives of the above family, f′n(x) = n cos(nx), is not uniformly bounded. Each f′n is bounded by |n|, but there is no real number M such that |n| ≤ M for all integers n. References Mathematical analysis
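A quick numerical illustration of the example above (an informal check, not part of the definition): sampling the sine family and its derivatives on a grid shows that the largest absolute value of sin(nx) stays at 1 for every n, while that of the derivative grows like n, so no single constant bounds the whole family of derivatives.

import numpy as np

xs = np.linspace(-np.pi, np.pi, 10001)
for n in range(1, 6):
    f = np.sin(n * xs)         # members of the uniformly bounded family
    df = n * np.cos(n * xs)    # their derivatives
    print(n, np.abs(f).max(), np.abs(df).max())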
Uniform boundedness
[ "Mathematics" ]
187
[ "Mathematical analysis" ]
2,654,847
https://en.wikipedia.org/wiki/Biopharmaceutical
A biopharmaceutical, also known as a biological medical product, or biologic, is any pharmaceutical drug product manufactured in, extracted from, or semisynthesized from biological sources. Different from totally synthesized pharmaceuticals, they include vaccines, whole blood, blood components, allergenics, somatic cells, gene therapies, tissues, recombinant therapeutic protein, and living medicines used in cell therapy. Biologics can be composed of sugars, proteins, nucleic acids, or complex combinations of these substances, or may be living cells or tissues. They (or their precursors or components) are isolated from living sources—human, animal, plant, fungal, or microbial. They can be used in both human and animal medicine. Terminology surrounding biopharmaceuticals varies between groups and entities, with different terms referring to different subsets of therapeutics within the general biopharmaceutical category. Some regulatory agencies use the terms biological medicinal products or therapeutic biological product to refer specifically to engineered macromolecular products like protein- and nucleic acid-based drugs, distinguishing them from products like blood, blood components, or vaccines, which are usually extracted directly from a biological source. Biopharmaceutics is pharmaceutics that works with biopharmaceuticals. Biopharmacology is the branch of pharmacology that studies biopharmaceuticals. Specialty drugs, a recent classification of pharmaceuticals, are high-cost drugs that are often biologics. The European Medicines Agency uses the term advanced therapy medicinal products (ATMPs) for medicines for human use that are "based on genes, cells, or tissue engineering", including gene therapy medicines, somatic-cell therapy medicines, tissue-engineered medicines, and combinations thereof. Within EMA contexts, the term advanced therapies refers specifically to ATMPs, although that term is rather nonspecific outside those contexts. Gene-based and cellular biologics, for example, often are at the forefront of biomedicine and biomedical research, and may be used to treat a variety of medical conditions for which no other treatments are available. Building on the market approvals and sales of recombinant virus-based biopharmaceuticals for veterinary and human medicine, the use of engineered plant viruses has been proposed to enhance crop performance and promote sustainable production. In some jurisdictions, biologics are regulated via different pathways from other small molecule drugs and medical devices. Major classes Extracted from living systems Some of the oldest forms of biologics are extracted from the bodies of animals, and other humans especially. Important biologics include: Whole blood and other blood components Organ transplantation and tissue transplants Stem-cell therapy Antibodies for passive immunity (e.g., to treat a virus infection) Human reproductive cells Human breast milk Fecal microbiota Some biologics that were previously extracted from animals, such as insulin, are now more commonly produced by recombinant DNA. Produced by recombinant DNA Biologics can refer to a wide range of biological products in medicine. However, in most cases, the term is used more restrictively for a class of therapeutics (either approved or in development) that are produced using biological processes involving recombinant DNA technology. These medications are usually one of three types: Substances that are (nearly) identical to the body's key signaling proteins. 
Examples are the blood-production stimulating protein erythropoietin, the growth-stimulating hormone named "growth hormone", or biosynthetic human insulin and its analogues. Monoclonal antibodies. These are similar to the antibodies that the human immune system uses to fight off bacteria and viruses, but they are "custom-designed" (using hybridoma technology or other methods) and can therefore be made specifically to counteract or block any given substance in the body, or to target any specific cell type; examples of such monoclonal antibodies for use in various diseases are given in the table below. Receptor constructs (fusion proteins), usually based on a naturally occurring receptor linked to the immunoglobulin frame. In this case, the receptor provides the construct with detailed specificity, whereas the immunoglobulin structure imparts stability and other useful features in terms of pharmacology. Some examples are listed in the table below. Biologics as a class of medications in this narrower sense have had a profound impact on many medical fields, primarily rheumatology and oncology, but also cardiology, dermatology, gastroenterology, neurology, and others. In most of these disciplines, biologics have added major therapeutic options for treating many diseases, including some for which no effective therapies were available, and others where previously existing therapies were inadequate. However, the advent of biologic therapeutics has also raised complex regulatory issues (see below), and significant pharmacoeconomic concerns because the cost for biologic therapies has been dramatically higher than for conventional (pharmacological) medications. This factor has been particularly relevant since many biological medications are used to treat chronic diseases, such as rheumatoid arthritis or inflammatory bowel disease, or for the treatment of otherwise untreatable cancer during the remainder of life. The cost of treatment with a typical monoclonal antibody therapy for relatively common indications is generally in the range of €7,000–14,000 per patient per year. Older patients who receive biologic therapy for diseases such as rheumatoid arthritis, psoriatic arthritis, or ankylosing spondylitis are at increased risk for life-threatening infection, adverse cardiovascular events, and malignancy. The first such substance approved for therapeutic use was biosynthetic "human" insulin made via recombinant DNA. Sometimes referred to as rHI and sold under the trade name Humulin, it was developed by Genentech but licensed to Eli Lilly and Company, which manufactured and marketed it starting in 1982. Major kinds of biopharmaceuticals include: Blood factors (Factor VIII and Factor IX) Thrombolytic agents (tissue plasminogen activator) Hormones (insulin, glucagon, growth hormone, gonadotrophins) Haematopoietic growth factors (Erythropoietin, colony-stimulating factors) Interferons (Interferons-α, -β, -γ) Interleukin-based products (Interleukin-2) Vaccines (Hepatitis B surface antigen) Monoclonal antibodies (Various) Additional products (tumour necrosis factor, therapeutic enzymes) Research and development investment in new medicines by the biopharmaceutical industry stood at $65.2 billion in 2008. A few examples of biologics made with recombinant DNA technology include: Vaccines Many vaccines are grown in tissue cultures. Gene therapy Viral gene therapy involves artificially manipulating a virus to include a desirable piece of genetic material. 
Viral gene therapies using engineered plant viruses have been proposed to enhance crop performance and promote sustainable production. Biosimilars With the expiration of many patents for blockbuster biologics between 2012 and 2019, the interest in biosimilar production, i.e., follow-on biologics, has increased. Compared to small molecules that consist of chemically identical active ingredients, biologics are vastly more complex and consist of a multitude of subspecies. Due to their heterogeneity and the high process sensitivity, originators and follow-on biosimilars will exhibit variability in specific variants over time. The safety and clinical performance of both originator and biosimilar biopharmaceuticals must remain equivalent throughout their lifecycle. Process variations are monitored by modern analytical tools (e.g., liquid chromatography, immunoassays, mass spectrometry, etc.) and describe a unique design space for each biologic. Biosimilars require a different regulatory framework compared to small-molecule generics. Legislation in the 21st century has addressed this by recognizing an intermediate ground of testing for biosimilars. The filing pathway requires more testing than for small-molecule generics, but less testing than for registering completely new therapeutics. In 2003, the European Medicines Agency introduced an adapted pathway for biosimilars, termed similar biological medicinal products. This pathway is based on a thorough demonstration of comparability of the product to an existing approved product. Within the United States, the Patient Protection and Affordable Care Act of 2010 created an abbreviated approval pathway for biological products shown to be biosimilar to, or interchangeable with, an FDA-licensed reference biological product. Researchers are optimistic that the introduction of biosimilars will reduce medical expenses to patients and the healthcare system. Commercialization When a new biopharmaceutical is developed, the company will typically apply for a patent, which is a grant to exclusive manufacturing rights. This is the primary means by which the drug developer can recover the investment cost for development of the biopharmaceutical. The patent laws in the United States and Europe differ somewhat on the requirements for a patent, with the European requirements perceived as more difficult to satisfy. The total number of patents granted for biopharmaceuticals has risen significantly since the 1970s. In 1978 the total patents granted was 30. This had climbed to 15,600 in 1995, and by 2001 there were 34,527 patent applications. In 2012 the US had the highest IP (Intellectual Property) generation within the biopharmaceutical industry, generating 37 percent of the total number of granted patents worldwide; however, there is still a large margin for growth and innovation within the industry. Revisions to the current IP system to ensure greater reliability for R&D (research and development) investments is a prominent topic of debate in the US as well. Blood products and other human-derived biologics such as breast milk have highly regulated or very hard-to-access markets; therefore, customers generally face a supply shortage for these products. Institutions housing these biologics, designated as 'banks', often cannot distribute their product to customers effectively. Conversely, banks for reproductive cells are much more widespread and available due to the ease with which spermatozoa and egg cells can be used for fertility treatment. 
Large-scale production Biopharmaceuticals may be produced from microbial cells (e.g., recombinant E. coli or yeast cultures), mammalian cell lines (see Cell culture) and plant cell cultures (see Plant tissue culture) and moss plants in bioreactors of various configurations, including photo-bioreactors. Important issues of concern are cost of production (low-volume, high-purity products are desirable) and microbial contamination (by bacteria, viruses, mycoplasma). Alternative platforms of production which are being tested include whole plants (plant-made pharmaceuticals). Transgenics A potentially controversial method of producing biopharmaceuticals involves transgenic organisms, particularly plants and animals that have been genetically modified to produce drugs. This production is a significant risk for its investor due to production failure or scrutiny from regulatory bodies based on perceived risks and ethical issues. Biopharmaceutical crops also represent a risk of cross-contamination with non-engineered crops, or crops engineered for non-medical purposes. One potential approach to this technology is the creation of a transgenic mammal that can produce the biopharmaceutical in its milk, blood, or urine. Once an animal is produced, typically using the pronuclear microinjection method, it becomes efficacious to use cloning technology to create additional offspring that carry the favorable modified genome. The first such drug manufactured from the milk of a genetically modified goat was ATryn, but marketing permission was blocked by the European Medicines Agency in February 2006. This decision was reversed in June 2006 and approval was given August 2006. Regulation European Union In the European Union, a biological medicinal product is one of the active substance(s) produced from or extracted from a biological (living) system, and requires, in addition to physicochemical testing, biological testing for full characterisation. The characterisation of a biological medicinal product is a combination of testing the active substance and the final medicinal product together with the production process and its control. For example: Production process – it can be derived from biotechnology or from other technologies. It may be prepared using more conventional techniques as is the case for blood or plasma-derived products and a number of vaccines. Active substance – consisting of entire microorganisms, mammalian cells, nucleic acids, proteinaceous, or polysaccharide components originating from a microbial, animal, human, or plant source. Mode of action – therapeutic and immunological medicinal products, gene transfer materials, or cell therapy materials. United States In the United States, biologics are licensed through the biologics license application (BLA), then submitted to and regulated by the FDA's Center for Biologics Evaluation and Research (CBER) whereas drugs are regulated by the Center for Drug Evaluation and Research. Approval may require several years of clinical trials, including trials with human volunteers. Even after the drug is released, it will still be monitored for performance and safety risks. The manufacture process must satisfy the FDA's "Good Manufacturing Practices", which are typically manufactured in a cleanroom environment with strict limits on the amount of airborne particles and other microbial contaminants that may alter the efficacy of the drug. 
Canada In Canada, biologics (and radiopharmaceuticals) are reviewed through the Biologics and Genetic Therapies Directorate within Health Canada. See also Antibody-drug conjugate Genetic engineering Host cell protein List of pharmaceutical companies List of recombinant proteins Nanomedicine References External links Biotechnology products Biotechnology Life sciences industry Pharmaceutical industry Pharmacy Specialty drugs
Biopharmaceutical
[ "Chemistry", "Biology" ]
2,882
[ "Pharmacology", "Specialty drugs", "Biotechnology products", "Life sciences industry", "Pharmacy", "Pharmaceutical industry", "Biotechnology", "nan", "Biopharmaceuticals" ]
2,655,094
https://en.wikipedia.org/wiki/Southwestern%20blot
The southwestern blot is a lab technique for identifying and characterizing DNA-binding proteins by their ability to bind to specific oligonucleotide probes. Determination of molecular weight of proteins binding to DNA is also made possible by the technique. The name originates from a combination of ideas underlying the Southern blotting and Western blotting techniques, which detect DNA and protein respectively. Similar to other types of blotting, proteins are separated by SDS-PAGE and are subsequently transferred to nitrocellulose membranes. Thereafter the procedure varies: since the first southwestern blots were described, many modified protocols have been proposed with the goal of enhancing results. Early protocols were hampered by the need for large amounts of proteins and their susceptibility to degradation while being isolated. Southwestern blotting was first described by Brian Bowen, Jay Steinberg, U.K. Laemmli, and Harold Weintraub in 1979. At the time, the technique was simply called "protein blotting". While there were existing techniques for purification of proteins associated with DNA, they often had to be used together to yield desired results. Thus, Bowen and colleagues sought to describe a procedure that could simplify the current methods of their time. Method Original Method To begin, proteins of interest are prepared for the SDS-PAGE technique and subsequently loaded onto the gel for separation on the basis of molecular size. Large proteins will have difficulty navigating through the mesh-like structure of the gel as they cannot fit through the pores with the ease that smaller proteins can. As a result, large proteins do not travel very far on the gel in comparison to smaller proteins that travel further. After enough time, this results in distinct bands that can be visualized using a number of post-gel electrophoresis staining procedures. The bands are at different positions on the gel relative to the well that they were loaded into. Next, the proteins are renatured and the gel is pressed between two nitrocellulose filters, which rely on diffusion to transfer the proteins from the gel to the membrane filters. At this point, replicas of the gel have been created, each of which serves a particular purpose. One membrane filter can be stained to see the protein bands that were created from gel electrophoresis and the other is used in the actual process of hybridizing with prepared 32P radioactively labeled specific oligonucleotide probes. To detect any protein-DNA interactions, autoradiography is commonly used. Southwestern Blot Mapping "Southwestern blot mapping" is a time-efficient way of identifying DNA-binding proteins and specific sites on the genomic DNA that they interact with. First, proteins are prepared with a mixture that exposes them to the denaturing sodium dodecyl sulfate (SDS) agent. This exposure not only converts the proteins from a folded conformation to an unfolded conformation but also establishes a uniform charge among them, contributing to the ease of separation on a size basis using polyacrylamide gel (PAGE). Second, in contrast to the previous step, proteins on the resulting gel are to be renatured by removal of SDS. This serves to bring the proteins back to the form that ideally maximizes interactions later on in the procedure. Third, blotting takes place onto nitrocellulose membranes, relying on diffusion. 
Fourth, shifting to probe creation, particular restriction enzymes are chosen and used on the region of DNA under study to produce fragments of appropriate but different sizes. Fifth, the fragments are radioactively labeled and given appropriate time for binding to previously prepared blots. Once this time has elapsed, the blots are washed to remove any DNA that was not able to bind. Finally, the specifically-bound DNA is eluted from each individual protein-DNA complex and analyzed by another application of polyacrylamide gel electrophoresis. Results After time is allowed for binding with the oligonucleotide probes, the hope is that some of the proteins on the membrane filter have bound to the probes. Any probe that was not able to bind a protein needs to be removed. Once unbound probe removal has been taken care of, to better visualize the membrane filter, it is subjected to further varying procedures. By corresponding the resulting membrane filter to the second membrane filter that the gel was sandwiched between, the position of the protein in comparison to the molecular weight ladder gives information about the weight of the protein that bound to the probe. Method Modifications Instead of relying on standard diffusion to transfer the proteins from the gel to the filter, electroblotting is commonly used because it removes the denaturing agent SDS thereby allowing the proteins to renature as they move to the filter.   Skim milk is added to the filter before hybridizing with probes as it contains Bovine Serum Albumin (BSA) which prevents any unwanted or weak interactions of DNA to the nitrocellulose membrane. A rapid dimethylsulphate (DMS) protection assay can be used to identify non-specific binding vs. specific binding on blots. During the DNA probe hybridization step, defined amounts of salt are used to enhance specific interactions that occur between the DNA and proteins. Advantages, Disadvantages, Potential Advantages Given the ability of southwestern blotting towards studying the affinity of proteins towards binding to DNA, this information can further be used with regards to uncovering specific protein factors that bind to DNA as well. These protein factors may be involved in controlling gene expression. Unlike electrophoretic mobility shift and DNA foot printing, determination of molecular weight of unknown proteins that bind to DNA can occur.   Bowen and colleagues not only experimented and demonstrated a procedure for detecting DNA-binding proteins but also procedures for RNA-binding proteins as well as histone-binding proteins.   Results can be combined with mass spectrometry to assist in DNA-binding protein identification. Isoelectric point determination is possible through the use of 2D-SDS-PAGE instead of the standard one dimension. Disadvantages Since the technique involves the use of SDS-PAGE which utilizes the effect that sodium-dodecyl sulfate has on proteins which is to denature them, there is the possibility of dissociating protein factors that possess multiple subunits through the process. This could end up affecting how well the protein factor binds to DNA in later steps of the technique. Not all proteins renature during the transfer process to the nitrocellulose membranes after separation via SDS-PAGE. This area of protein renaturation is still being experimented with. Also being experimented with is reusability of southwestern blots to test proteins with a variety of DNA probes before disposal. 
However, the main challenge is putting together a scheme that outlines conditions that can remove previously used probes from the blot without the expense of denaturing the proteins or extracting them. Potential Overcoming the reusability challenge with the blots, can allow for the use of a particular oligonucleotide probe where the radioactive labeling is varied to study the degree of binding that mutants of the probe possess with proteins. Oligonucleotides are not able to penetrate the gel of SDS-PAGE to bind to the proteins which stresses the importance of having the blotting step. However, it may not be needed if "protein-resolving-oligonucleotide-permeable" gels are created. With such gels, oligonucleotide probes of interest could enter the gels and bind to the proteins minimizing steps of the technique while keeping results in one place. There is evidence that tissue-specific DNA binding proteins can be identified by use of southwestern blot mapping. Moreover, their sequence-specific binding allows the purification of the corresponding selectively bound DNA fragments and may improve protein-mediated cloning of DNA regulatory sequences. References Molecular biology Laboratory techniques Molecular biology techniques
Southwestern blot
[ "Chemistry", "Biology" ]
1,643
[ "Biochemistry", "Molecular biology techniques", "nan", "Molecular biology" ]
2,655,103
https://en.wikipedia.org/wiki/Tricalcium%20phosphate
Tricalcium phosphate (sometimes abbreviated TCP), more commonly known as calcium phosphate, is a calcium salt of phosphoric acid with the chemical formula Ca3(PO4)2. It is also known as tribasic calcium phosphate and bone phosphate of lime (BPL). It is a white solid of low solubility. Most commercial samples of "tricalcium phosphate" are in fact hydroxyapatite. It exists as three crystalline polymorphs α, α′, and β. The α and α′ states are stable at high temperatures. Nomenclature Calcium phosphate refers to numerous materials consisting of calcium ions (Ca2+) together with orthophosphates (PO43−), metaphosphates or pyrophosphates (P2O74−) and occasionally oxide and hydroxide ions. Especially, the common mineral apatite has formula Ca5(PO4)3X, where X is F, Cl, OH, or a mixture; it is hydroxyapatite if the extra ion is mainly hydroxide. Much of the "tricalcium phosphate" on the market is actually powdered hydroxyapatite. Preparation Tricalcium phosphate is produced commercially by treating hydroxyapatite with phosphoric acid and slaked lime. It cannot be precipitated directly from aqueous solution. Typically double decomposition reactions are employed, involving a soluble phosphate and a calcium salt, e.g. (NH4)2HPO4 + Ca(NO3)2; the precipitation is performed under carefully controlled pH conditions. The precipitate will either be "amorphous tricalcium phosphate", ATCP, or calcium-deficient hydroxyapatite, CDHA, Ca9(HPO4)(PO4)5(OH) (note that CDHA is sometimes termed apatitic calcium triphosphate). Crystalline tricalcium phosphate can be obtained by calcining the precipitate. β-Ca3(PO4)2 is generally formed; higher temperatures are required to produce α-Ca3(PO4)2. An alternative to the wet procedure entails heating a mixture of a calcium pyrophosphate and calcium carbonate: CaCO3 + Ca2P2O7 → Ca3(PO4)2 + CO2 Structure of β-, α- and α′-Ca3(PO4)2 polymorphs Tricalcium phosphate has three recognised polymorphs, the rhombohedral β form and two high-temperature forms, monoclinic α and hexagonal α′. β-Tricalcium phosphate has a crystallographic density of 3.066 g cm−3, while the high-temperature forms are less dense: α-tricalcium phosphate has a density of 2.866 g cm−3 and α′-tricalcium phosphate has a density of 2.702 g cm−3. All forms have complex structures consisting of tetrahedral phosphate centers linked through oxygen to the calcium ions. The high-temperature forms each have two types of columns, one containing only calcium ions and the other both calcium and phosphate. There are differences in chemical and biological properties between the β and α forms: the α form is more soluble and biodegradable. Both forms are available commercially and are present in formulations used in medical and dental applications. Occurrence Calcium phosphate is one of the main combustion products of bone (see bone ash). Calcium phosphate is also commonly derived from inorganic sources such as mineral rock. Tricalcium phosphate occurs naturally in several forms, including as a rock in Morocco, Israel, the Philippines, Egypt, and Kola (Russia), and in smaller quantities in some other countries. The natural form is not completely pure, and there are some other components like sand and lime which can change the composition. The content of P2O5 in most calcium phosphate rocks is 30% to 40% by weight. It also occurs in the skeletons and teeth of vertebrate animals, and in milk. 
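For comparison with the 30% to 40% figure quoted for phosphate rock, the theoretical P2O5 content of pure tricalcium phosphate follows from molar masses alone; the short illustrative calculation below (approximate atomic masses, purely for orientation) gives roughly 46%, which is why natural rock containing sand, lime and other components assays lower.

# Approximate atomic masses in g/mol
Ca, P, O = 40.08, 30.97, 16.00
m_tcp = 3 * Ca + 2 * P + 8 * O    # molar mass of Ca3(PO4)2, about 310 g/mol
m_p2o5 = 2 * P + 5 * O            # molar mass of P2O5, about 142 g/mol
# One formula unit of Ca3(PO4)2 contains the phosphorus of one P2O5 unit
print(m_p2o5 / m_tcp)             # about 0.46, i.e. roughly 46% P2O5 by weight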
Biphasic calcium phosphate, BCP Biphasic calcium phosphate, BCP, was originally reported as tricalcium phosphate, but X-Ray diffraction techniques showed that the material was an intimate mixture of two phases, hydroxyapatite (HA) and β-tricalcium phosphate. It is a ceramic. Preparation involves sintering, causing irreversible decomposition of calcium deficient apatites alternatively termed non-stoichiometric apatites or basic calcium phosphate. An example is: Ca10−δ(PO4)6−δ(HPO4)δ(OH)2−δ → (1−δ) Ca10(PO4)6(OH)2 + 3δ Ca3(PO4)2 β-TCP can contain impurities, for example calcium pyrophosphate, and apatite. β-TCP is bioresorbable. The biodegradation of BCP involves faster dissolution of the β-TCP phase followed by elimination of HA crystals. β-TCP does not dissolve in body fluids at physiological pH levels, dissolution requires cell activity producing acidic pH. Uses Food additive Tricalcium phosphate is used in powdered spices as an anticaking agent, e.g. to prevent table salt from caking. The calcium phosphates have been assigned European food additive number E341. Health and beauty products It is also found in baby powder, antacids and toothpaste. Toothpastes with functionalized β-tricalcium phosphate (fTCP) may help remineralize tooth enamel. Biomedical It is also used as a nutritional supplement and occurs naturally in cow milk, although the most common and economical forms for supplementation are calcium carbonate (which should be taken with food) and calcium citrate (which can be taken without food). There is some debate about the different bioavailabilities of the different calcium salts. It can be used as a tissue replacement for repairing bony defects when autogenous bone graft is not feasible or possible. It may be used alone or in combination with a biodegradable, resorbable polymer such as polyglycolic acid. It may also be combined with autologous materials for a bone graft. Porous β-tricalcium phosphate scaffolds are employed as drug carrier systems for local drug delivery in bone. Natural occurrence Tuite, a natural analogue of tricalcium orthophosphate(V), is a rare component of some meteorites. Its formation is related to shock metamorphism. References Biomaterials Calcium compounds Phosphates E-number additives
Tricalcium phosphate
[ "Physics", "Chemistry", "Biology" ]
1,390
[ "Biomaterials", "Salts", "Materials", "Phosphates", "Matter", "Medical technology" ]
2,655,717
https://en.wikipedia.org/wiki/Heusler%20compound
Heusler compounds are magnetic intermetallics with face-centered cubic crystal structure and a composition of XYZ (half-Heuslers) or X2YZ (full-Heuslers), where X and Y are transition metals and Z is in the p-block. The term derives from the name of German mining engineer and chemist Friedrich Heusler, who studied such a compound (Cu2MnAl) in 1903. Many of these compounds exhibit properties relevant to spintronics, such as magnetoresistance, variations of the Hall effect, ferro-, antiferro-, and ferrimagnetism, half- and semimetallicity, semiconductivity with spin filter ability, superconductivity, topological band structure and are actively studied as thermoelectric materials. Their magnetism results from a double-exchange mechanism between neighboring magnetic ions. Manganese, which sits at the body centers of the cubic structure, was the magnetic ion in the first Heusler compound discovered. (See the Bethe–Slater curve for details of why this happens.) Styles of writing chemical formula Depending on the field of literature being surveyed, one might encounter the same compound referred to with different chemical formulas. An example of the most common difference is X2YZ versus XY2Z, where the labels of the two transition metals X and Y in the compound are swapped. The traditional convention X2YZ arises from the interpretation of Heuslers as intermetallics and is used predominantly in literature studying magnetic applications of Heuslers compounds. The XY2Z convention on the other hand is used mostly in thermoelectric materials and transparent conducting applications literature where semiconducting Heuslers (most half-Heuslers are semiconductors) are used. This convention, in which the left-most element on the periodic table comes first, uses the Zintl interpretation of semiconducting compounds where the chemical formula XY2Z is written in order of increasing electronegativity. In well-known compounds such as Fe2VAl which were historically thought of as metallic (semi-metallic) but were more recently shown to be small-gap semiconductors one might find both styles being used. In the present article semiconducting compounds might sometimes be mentioned in the XY2Z style. "Off-stoichiometric" Heuslers Although traditionally thought to form at compositions XYZ and X2YZ, studies published after 2015 have discovered and reliably predicted Heusler compounds with atypical compositions such as XY0.8Z and X1.5YZ. Besides these ternary compositions, quaternary Heusler compositions called the double Half-Heusler X2YY'Z2 (e.g. Ti2FeNiSb2) and triple Half-Heusler X2X'Y3Z3 (for e.g. Mg2VNi3Sb3) have also been discovered. These "off-stoichiometric" (that is, differing from the well-known XYZ and X2YZ compositions) Heuslers are mostly semiconductors in the low temperature T = 0 K limit. The stable compositions and corresponding electrical properties for these compounds can be quite sensitive to temperature and their order-disorder transition temperatures often occur below room-temperatures. Large amounts of defects at the atomic scale in off-stoichiometric Heuslers helps them achieve very low thermal conductivities and make them favorable for thermoelectric applications. The X1.5YZ semiconducting composition is stabilized by the transition metal X playing a dual role (electron donor as well as acceptor) in the structure. Half-Heusler thermoelectrics The half-Heusler compounds have distinctive properties and high tunability which makes the class very promising as thermoelectric materials. 
A study has predicted that there can be as many as 481 stable half-Heusler compounds using high-throughput ab initio calculation combine with machine learning techniques. The particular half-Heusler compounds of interest as thermoelectric materials (space group ) are the semiconducting ternary compounds with a general formula XYZ where X is a more electropositive transition metal (such as Ti or Zr), Y is a less electropositive transition metal (such Ni or Co), and Z is heavy main group element (such as Sn or Sb). This flexible range of element selection allows many different combinations to form a half-Heusler phase and enables a diverse range of material properties. Half-Heusler thermoelectric materials have distinct advantages over many other thermoelectric materials; low toxicity, inexpensive element, robust mechanical properties, and high thermal stability make half-Heusler thermoelectrics an excellent option for mid-high temperature application. However, the high thermal conductivity, which is intrinsic to highly symmetric HH structure, has made HH thermoelectric generally less efficient than other classes of TE materials. Many studies have focused on improving HH thermoelectric by reducing the lattice thermal conductivity and zT > 1 has been repeatedly recorded. Magnetic properties The magnetism of the early full-Heusler compound Cu2MnAl varies considerably with heat treatment and composition. It has a room-temperature saturation induction of around 8,000 gauss, which exceeds that of the element nickel (around 6100 gauss) but is smaller than that of iron (around 21500 gauss). For early studies see. In 1934, Bradley and Rogers showed that the room-temperature ferromagnetic phase was a fully ordered structure of the L21 Strukturbericht type. This has a primitive cubic lattice of copper atoms with alternate cells body-centered by manganese and aluminium. The lattice parameter is 5.95 Å. The molten alloy has a solidus temperature of about 910 °C. As it is cooled below this temperature, it transforms into disordered, solid, body-centered cubic beta-phase. Below 750 °C, a B2 ordered lattice forms with a primitive cubic copper lattice, which is body-centered by a disordered manganese-aluminium sublattice. Cooling below 610 °C causes further ordering of the manganese and aluminium sub-lattice to the L21 form. In non-stoichiometric alloys, the temperatures of ordering decrease, and the range of anealing temperatures, where the alloy does not form microprecipitates, becomes smaller than for the stoichiometric material. Oxley found a value of 357 °C for the Curie temperature, below which the compound becomes ferromagnetic. Neutron diffraction and other techniques have shown that a magnetic moment of around 3.7 Bohr magnetons resides almost solely on the manganese atoms. As these atoms are 4.2 Å apart, the exchange interaction, which aligns the spins, is likely indirect and is mediated through conduction electrons or the aluminium and copper atoms. Electron microscopy studies demonstrated that thermal antiphase boundaries (APBs) form during cooling through the ordering temperatures, as ordered domains nucleate at different centers within the crystal lattice and are often out of step with each other where they meet. The anti-phase domains grow as the alloy is annealed. There are two types of APBs corresponding to the B2 and L21 types of ordering. APBs also form between dislocations if the alloy is deformed. 
At the APB the manganese atoms will be closer than in the bulk of the alloy and, for non-stoichiometric alloys with an excess of copper (e.g. Cu2.2MnAl0.8), an antiferromagnetic layer forms on every thermal APB. These antiferromagnetic layers completely supersede the normal magnetic domain structure and stay with the APBs if they are grown by annealing the alloy. This significantly modifies the magnetic properties of the non-stoichiometric alloy relative to the stoichiometric alloy which has a normal domain structure. Presumably this phenomenon is related to the fact that pure manganese is an antiferromagnet although it is not clear why the effect is not observed in the stoichiometric alloy. Similar effects occur at APBs in the ferromagnetic alloy MnAl at its stoichiometric composition. Some Heusler compounds also exhibit properties of materials known as ferromagnetic shape-memory alloys. These are generally composed of nickel, manganese and gallium and can change their length by up to 10% in a magnetic field. Mechanical properties Understanding the mechanical properties of Heusler compounds is paramount for temperature-sensitive applications (e.g. thermoelectrics) for which some sub-classes of Heusler compounds are used. However, experimental studies are rarely encountered in literature. In fact, the commercialization of these compounds is limited by the material's ability to undergo intense, repetitive thermal cycling and resist cracking from vibrations. An appropriate measure for crack resistance is the material's toughness, which typically scales inversely with another important mechanical property: the mechanical strength. In this section, we highlight existing experimental and computational studies on the mechanical properties of Heusler alloys. Note that the mechanical properties of such a compositionally-diverse class of materials is expectedly dependent on the chemical composition of the alloys themselves, and therefore trends in mechanical properties are difficult to identify without a case-by-case study. The elastic modulus values of half-Heusler alloys range from 83 to 207 GPa, whereas the bulk modulus spans a tighter range from 100 GPa in HfNiSn to 130 GPa in TiCoSb. A collection of various density functional theory (DFT) calculations show that half-Heusler compounds are predicted to have a lower elastic, shear, and bulk modulus than in quaternary-, full-, and inverse-Hausler alloys. DFT also predicts a decrease in elastic modulus with temperature in Ni2XAl (X=Sc, Ti, V), as well as an increase in stiffness with pressure. The decrease in modulus with respect to temperature is also observed in TiNiSn, ZrNiSn, and HfNiSn, where ZrNiSn has the highest modulus and Hf has the lowest. This phenomenon can be explained by the fact that the elastic modulus decreases with increasing interatomic separation: as temperature increases, the atomic vibrations also increase, resulting in a larger equilibrium interatomic separation. The mechanical strength is also rarely studied in Heusler compounds. One study has shown that, in off-stoichiometric Ni2MnIn, the material reaches a peak strength of 475 MPa at 773 K, which drastically reduces to below 200 MPa at 973 K. In another study, a polycrystalline Heusler alloy composed of the Ni-Mn-Sn ternary composition space was found to possess a peak compressive strength of about 2000 MPa with plastic deformation up to 5%. 
However, the addition of Indium to the Ni-Mn-Sn ternary alloy not only increases the porosity of the samples, but it also reduces the compressive strength to 500 MPa. It is unclear from the study what percentage of the porosity increase from the indium addition reduces the strength. Note that this is opposite to the outcome expected from solid solution strengthening, where adding indium to the ternary system slows dislocation movement through dislocation-solute interaction and subsequently increases the material's strength. The fracture toughness can also be tuned with composition modifications. For example, the average toughness of Ti1−x(Zr, Hf)xNiSn ranges from 1.86 MPa m1/2 to 2.16 MPa m1/2, increasing with Zr/Hf content. The preparation of samples may affect the measured fracture toughness however, as elaborated by O’Connor et al. In their study, samples of Ti0.5Hf0.5Co0.5Ir0.5Sb1−xSnx were prepared using three different methods: a high-temperature solid state reaction, high-energy ball milling, and a combination of both. The study found higher fracture toughness in samples prepared without a high-energy ball milling step of 2.7 MPa m1/2 to 4.1 MPa m1/2, as opposed to samples that were prepared with ball milling of 2.2 MPa m1/2 to 3.0 MPa m1/2. Fracture toughness is sensitive to inclusions and existing cracks in the material, so it is as expected dependent on the sample preparation. Half-metallic ferromagnetic Heusler compounds Half-metallic ferromagnets exhibit a metallic behavior in one spin channel and an insulating behavior in the other spin channel. The first example of Heusler half-metallic ferromagnets was first investigated by de Groot et al., with the case of NiMnSb. Half-metallicity leads to the full polarization of the conducting electrons. Half metallic ferromagnets are therefore promising for spintronics applications. List of notable Heusler compounds Cu2MnAl, Cu2MnIn, Cu2MnSn Ni2MnAl, Ni2MnIn, Ni2MnSn, Ni2MnSb, Ni2MnGa Co2MnAl, Co2MnSi, Co2MnGa, Co2MnGe, Co2NiGa, Co2MnSn Pd2MnAl, Pd2MnIn, Pd2MnSn, Pd2MnSb Co2FeSi, Co2FeAl Fe2VAl Mn2VGa, Co2FeGe Co2CrxFe1−xX(X=Al, Si) YbBiPt References Further reading M Guezlane, H Baaziz, F El Haj Hassan, Z Charifi, Y Djaballah, "Electronic, magnetic and thermal properties of Co2CrxFe1− xX (X= Al, Si) Heusler alloys: First-principles calculations", Journal of Magnetism and Magnetic Materials, vol. 414, 2016, p. 219-226 (DOI https://doi.org/10.1016/j.jmmm.2016.04.056, External links National Pollutant Inventory – Copper and compounds fact sheet Copper alloys Intermetallics Magnetic alloys Ferromagnetic materials Spintronics Crystal structure types
Heusler compound
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,050
[ "Inorganic compounds", "Copper alloys", "Metallurgy", "Spintronics", "Ferromagnetic materials", "Electric and magnetic fields in matter", "Crystal structure types", "Materials science", "Magnetic alloys", "Materials", "Crystallography", "Intermetallics", "Condensed matter physics", "Alloys...
2,656,271
https://en.wikipedia.org/wiki/Evershed%20effect
The Evershed effect, named after the British astronomer John Evershed, is the radial flow of gas across the photospheric surface of the penumbra of sunspots from the inner border with the umbra towards the outer edge. The speed varies from around 1 km/s at the border between the umbra and the penumbra to a maximum of around double this in the middle of the penumbra and falls off to zero at the outer edge of the penumbra. Evershed first detected this phenomenon in January 1909, whilst working at the Kodaikanal Solar Observatory in India, when he found that the spectral lines of sunspots showed doppler shift. Afterwards, measurements of the spectral emission lines emitted in the ultraviolet wavelengths have shown a systematic red-shift. The Evershed effect is common to every spectral line formed at a temperature below 105 K; this fact would imply a constant downflow from the transition region towards the chromosphere. The observed velocity is about 5 km/s. Of course, this is impossible, since if it were true, the corona would disappear in a short time instead of being suspended over the Sun at temperatures of million degrees over distances much larger than a solar radius. Many theories have been proposed to explain this redshift in line profiles of the transition region, but the problem is still unsolved, since a coherent theory should take into account all the physical observations: UV line profiles are redshifted on average, but they show back and forth velocity oscillations at the same time. In synthesis, the proposed mechanisms are: siphon flows in coronal loops driven by a pressure difference, different cross-sections of the coronal loops footpoints, the return of spicules, multiple flows, nanoflares, and thermal instabilities during chromospheric condensation. The effect was commemorated in a postage stamp issued in India on 2 December 2008. See also Spectroscopy Plasma physics References Solar phenomena Emission spectroscopy Plasma phenomena
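The line displacements behind such measurements are tiny; using the non-relativistic Doppler relation Δλ/λ = v/c, a short illustrative calculation (the 630 nm wavelength is merely an assumed example of a visible photospheric line, not a value from the text) shows shifts of only a few thousandths of a nanometre for the quoted flow speeds, which is why precise spectroscopy was required.

c = 299_792.458                  # speed of light, km/s
lam = 630.0                      # example photospheric line wavelength in nm (assumed)
for v in (1.0, 2.0, 5.0):        # flow speeds in km/s mentioned above
    print(v, lam * v / c)        # Doppler shift in nm, roughly 0.002 to 0.01 nm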
Evershed effect
[ "Physics", "Chemistry", "Astronomy" ]
407
[ "Spectroscopy stubs", "Physical phenomena", "Spectrum (physical sciences)", "Plasma physics", "Plasma phenomena", "Emission spectroscopy", "Astronomy stubs", "Solar phenomena", "Stellar phenomena", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
22,472,780
https://en.wikipedia.org/wiki/Quantum%20chromodynamics%20binding%20energy
Quantum chromodynamics binding energy (QCD binding energy), gluon binding energy or chromodynamic binding energy is the energy binding quarks together into hadrons. It is the energy of the field of the strong force, which is mediated by gluons. Motion-energy and interaction-energy contribute most of the hadron's mass. Source of mass Most of the mass of hadrons is actually QCD binding energy, through mass–energy equivalence. This phenomenon is related to chiral symmetry breaking. In the case of nucleons (protons and neutrons), QCD binding energy forms about 99% of the nucleon's mass. The kinetic energy of the hadron's constituents, moving at near the speed of light, contributes greatly to the hadron mass; otherwise most of the rest is actual QCD binding energy, which emerges in a complex way from the potential-like terms in the QCD Lagrangian. For protons, the sum of the rest masses of the three valence quarks (two up quarks and one down quark) is approximately 9 MeV/c2, while the proton's total mass is about 938.3 MeV/c2. In the standard model, this "quark current mass" can nominally be attributed to the Higgs interaction. For neutrons, the sum of the rest masses of the three valence quarks (two down quarks and one up quark) is approximately 12 MeV/c2, while the neutron's total mass is about 939.6 MeV/c2. Considering that nearly all of the atom's mass is concentrated in the nucleons, this means that about 99% of the mass of everyday matter (baryonic matter) is, in fact, chromodynamic binding energy. Gluon energy While gluons are massless, they still possess energy: chromodynamic binding energy. In this way, they are similar to photons, which are also massless particles carrying energy: photon energy. The amount of energy per single gluon, or "gluon energy", cannot be directly measured, though a distribution can be inferred from deep inelastic scattering (DIS) experiments (see ref [4] for an old but still valid introduction). Unlike photon energy, which is quantifiable, described by the Planck–Einstein relation and depends on a single variable (the photon's frequency), no simple formula exists for the quantity of energy carried by each gluon. While the effects of a single photon can be observed, single gluons have not been observed outside of a hadron. A hadron is in totality composed of gluons, valence quarks, sea quarks and other virtual particles. The gluon content of a hadron can be inferred from DIS measurements. Again, not all of the QCD binding energy is gluon interaction energy; rather, some of it comes from the kinetic energy of the hadron's constituents. Currently, the total QCD binding energy per hadron can be estimated through a combination of the factors mentioned. In the future, studies into quark–gluon plasma will better complement the DIS studies and improve our understanding of the situation. See also Gluon Quark Current quark and constituent quark Hadron Strong force Quantum chromodynamics Chiral symmetry breaking Photon energy Invariant mass and relativistic mass Binding energy References Halzen, Francis and Martin, Alan D., Quarks and Leptons: An Introductory Course in Modern Particle Physics, John Wiley & Sons (1984). Physical quantities Hadrons Binding energy
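The "about 99%" statements can be checked with back-of-the-envelope arithmetic; the sketch below uses the approximate current-quark and nucleon masses quoted above (all values in MeV/c2 and only indicative, since current-quark masses are scheme-dependent).

# Approximate masses in MeV/c^2
m_u, m_d = 2.2, 4.7              # up and down current-quark masses (approximate)
m_p, m_n = 938.3, 939.6          # proton and neutron masses

quarks_p = 2 * m_u + m_d         # valence quark rest mass in a proton (about 9)
quarks_n = 2 * m_d + m_u         # valence quark rest mass in a neutron (about 12)

print(1 - quarks_p / m_p)        # about 0.99: fraction not explained by quark rest mass
print(1 - quarks_n / m_n)        # about 0.99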
Quantum chromodynamics binding energy
[ "Physics", "Mathematics" ]
757
[ "Physical phenomena", "Matter", "Physical quantities", "Quantity", "Hadrons", "Physical properties", "Subatomic particles" ]
22,473,529
https://en.wikipedia.org/wiki/Additive%20map
In algebra, an additive map, Z-linear map or additive function is a function f that preserves the addition operation: f(x + y) = f(x) + f(y) for every pair of elements x and y in the domain of f. For example, any linear map is additive. When the domain is the real numbers, this is Cauchy's functional equation. For a specific case of this definition, see additive polynomial. More formally, an additive map is a Z-module homomorphism. Since an abelian group is a Z-module, it may be defined as a group homomorphism between abelian groups. A map that is additive in each of two arguments separately is called a bi-additive map or a Z-bilinear map. Examples Typical examples include maps between rings, vector spaces, or modules that preserve the additive group. An additive map does not necessarily preserve any other structure of the object; for example, the product operation of a ring. If f and g are additive maps, then the map f + g (defined pointwise) is additive. Properties Definition of scalar multiplication by an integer Suppose that X is an additive group with identity element 0 and that the inverse of x ∈ X is denoted by −x. For any x ∈ X and integer n, let nx := 0 if n = 0, nx := x + ... + x (n summands) if n > 0, and nx := −((−n)x) if n < 0. Thus 0x = 0 and 1x = x, and it can be shown that (m + n)x = mx + nx and (mn)x = m(nx) for all integers m and n and all x ∈ X. This definition of scalar multiplication makes the cyclic subgroup generated by x into a left Z-module; if X is commutative, then it also makes X into a left Z-module. Homogeneity over the integers If f : X → Y is an additive map between additive groups then f(0) = 0 and f(−x) = −f(x) for all x ∈ X (where negation denotes the additive inverse). Consequently, f(nx) = n f(x) for all x ∈ X and all integers n (where nx is defined as above). In other words, every additive map is homogeneous over the integers. Consequently, every additive map between abelian groups is a homomorphism of Z-modules. Homomorphism of Q-modules If the additive abelian groups X and Y are also unital modules over the rationals Q (such as real or complex vector spaces) then an additive map f : X → Y satisfies f(qx) = q f(x) for all rational q and all x ∈ X. In other words, every additive map is homogeneous over the rational numbers. Consequently, every additive map between unital Q-modules is a homomorphism of Q-modules. Despite being homogeneous over the rationals, as described in the article on Cauchy's functional equation, even when X = Y = R it is nevertheless still possible for an additive function to fail to be homogeneous over the real numbers; said differently, there exist additive maps f : R → R that are not of the form f(x) = cx for some constant c. In particular, there exist additive maps that are not linear maps. See also Notes Proofs References Ring theory Morphisms Types of functions
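The integer-homogeneity claims above follow from additivity alone; a compact worked chain of equalities (a sketch written only for a positive integer n, with the negative case reduced to it via f(−x) = −f(x)) makes the reasoning explicit: f(0) = f(0 + 0) = f(0) + f(0), hence f(0) = 0; then 0 = f(0) = f(x + (−x)) = f(x) + f(−x), hence f(−x) = −f(x); and finally f(nx) = f(x + ... + x) = f(x) + ... + f(x) = n f(x), using additivity on each of the n summands in turn.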
Additive map
[ "Mathematics" ]
494
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Ring theory", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Types of functions", "Morphisms" ]
22,474,664
https://en.wikipedia.org/wiki/Algorithmic%20Lov%C3%A1sz%20local%20lemma
In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence. Given a finite set of bad events {A1, ..., An} in a probability space with limited dependence amongst the Ai's and with specific bounds on their respective probabilities, the Lovász local lemma proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on how to avoid the bad events. If the events {A1, ..., An} are determined by a finite collection of mutually independent random variables, a simple Las Vegas algorithm with expected polynomial runtime proposed by Robin Moser and Gábor Tardos can compute an assignment to the random variables such that all events are avoided. Review of Lovász local lemma The Lovász Local Lemma is a powerful tool commonly used in the probabilistic method to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász Local Lemma to bound the probability that any of the features is missing. The absence of a feature is considered a bad event and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows: Let 𝒜 = {A1, ..., An} be a finite set of events in the probability space Ω. For each A ∈ 𝒜 let Γ(A) denote a subset of 𝒜 such that A is independent from the collection of events 𝒜 \ ({A} ∪ Γ(A)). If there exists an assignment of reals x : 𝒜 → (0, 1) to the events such that Pr[A] ≤ x(A) ∏_{B ∈ Γ(A)} (1 − x(B)) for all A ∈ 𝒜, then the probability of avoiding all events in 𝒜 is positive; in particular, Pr[no event in 𝒜 occurs] ≥ ∏_{A ∈ 𝒜} (1 − x(A)). Algorithmic version of the Lovász local lemma The Lovász Local Lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in practice. Note that random sampling from the probability space Ω is likely to be inefficient, since the probability of the event of interest is only bounded by a product of small numbers and therefore likely to be very small. Under the assumption that all of the events in 𝒜 are determined by a finite collection of mutually independent random variables in Ω, Robin Moser and Gábor Tardos proposed an efficient randomized algorithm that computes an assignment to the random variables such that all events in 𝒜 are avoided. Hence, this algorithm can be used to efficiently construct witnesses of complex objects with prescribed features for most problems to which the Lovász Local Lemma applies. History Prior to the work of Moser and Tardos, earlier work had also made progress in developing algorithmic versions of the Lovász Local Lemma. In 1991 József Beck first gave a proof that an algorithmic version was possible. In this breakthrough result, a stricter requirement was imposed upon the problem formulation than in the original non-constructive definition. Beck's approach required that, for each A ∈ 𝒜, the number of dependencies of A be bounded above by approximately 2^(k/48), where 2^(−k) is a bound on the probability of each bad event. The existential version of the Local Lemma permits a larger upper bound on dependencies, of roughly 2^k/e. This bound is known to be tight. Since the initial algorithm, work has been done to push algorithmic versions of the Local Lemma closer to this tight value. The work of Moser and Tardos is the most recent in this chain, and provides an algorithm that achieves this tight bound. 
Algorithm Let us first introduce some concepts that are used in the algorithm. For any random variable denotes the current assignment (evaluation) of P. An assignment (evaluation) to all random variables is denoted . The unique minimal subset of random variables in that determine the event A is denoted by vbl(A). If the event A is true under an evaluation , we say that satisfies A, otherwise it avoids A. Given a set of bad events we wish to avoid that is determined by a collection of mutually independent random variables , the algorithm proceeds as follows: : a random evaluation of P while such that A is satisfied by pick an arbitrary satisfied event : a new random evaluation of P return In the first step, the algorithm randomly initializes the current assignment vP for each random variable . This means that an assignment vP is sampled randomly and independently according to the distribution of the random variable P. The algorithm then enters the main loop which is executed until all events in are avoided, at which point the algorithm returns the current assignment. At each iteration of the main loop, the algorithm picks an arbitrary satisfied event A (either randomly or deterministically) and resamples all the random variables that determine A. Main theorem Let be a finite set of mutually independent random variables in the probability space Ω. Let be a finite set of events determined by these variables. If there exists an assignment of reals to the events such that then there exists an assignment of values to the variables avoiding all of the events in . Moreover, the randomized algorithm described above resamples an event at most an expected times before it finds such an evaluation. Thus the expected total number of resampling steps and therefore the expected runtime of the algorithm is at most The proof of this theorem using the method of entropy compression can be found in the paper by Moser and Tardos Symmetric version The requirement of an assignment function x satisfying a set of inequalities in the theorem above is complex and not intuitive. But this requirement can be replaced by three simple conditions: , i.e. each event A depends on at most D other events, , i.e. the probability of each event A is at most p, , where e is the base of the natural logarithm. The version of the Lovász Local Lemma with these three conditions instead of the assignment function x is called the Symmetric Lovász Local Lemma. We can also state the Symmetric Algorithmic Lovász Local Lemma: Let be a finite set of mutually independent random variables and be a finite set of events determined by these variables as before. If the above three conditions hold then there exists an assignment of values to the variables avoiding all of the events in . Moreover, the randomized algorithm described above resamples an event at most an expected times before it finds such an evaluation. Thus the expected total number of resampling steps and therefore the expected runtime of the algorithm is at most . Example The following example illustrates how the algorithmic version of the Lovász Local Lemma can be applied to a simple problem. Let Φ be a CNF formula over variables X1, ..., Xn, containing n clauses, and with at least k literals in each clause, and with each variable Xi appearing in at most clauses. Then, Φ is satisfiable. This statement can be proven easily using the symmetric version of the Algorithmic Lovász Local Lemma. 
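A minimal sketch, in Python, of the resampling loop described above. The data representation (each variable as a named zero-argument sampler, each bad event as a pair of a predicate over the current assignment and the list vbl(A) of variables it depends on) is an assumption made for illustration; the loop itself follows the pseudocode: sample everything independently, then repeatedly pick an arbitrary violated event and resample its variables. In the standard statement of the main theorem, the expected number of resamplings of an event A is at most x(A)/(1 − x(A)).

```python
import random

def moser_tardos(variables, events, max_iterations=None):
    """Generic Moser-Tardos resampling loop.

    variables: dict mapping a variable name to a zero-argument sampler,
               e.g. {"x1": lambda: random.random() < 0.5}.
    events:    list of (holds, vbl) pairs, where holds(assignment) returns
               True if the *bad* event occurs under the current assignment,
               and vbl lists the variable names the event depends on.
    Returns an assignment under which no bad event occurs.
    """
    # Step 1: independent random initial assignment.
    assignment = {name: sampler() for name, sampler in variables.items()}

    steps = 0
    while True:
        violated = [e for e in events if e[0](assignment)]
        if not violated:
            return assignment                      # all bad events avoided
        # Step 2: pick an arbitrary violated event and resample its variables.
        _, vbl = random.choice(violated)
        for name in vbl:
            assignment[name] = variables[name]()
        steps += 1
        if max_iterations is not None and steps >= max_iterations:
            raise RuntimeError("iteration budget exceeded")
```

The k-CNF example in the next section instantiates this scheme with one bad event per clause.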
Let X1, ..., Xn be the set of mutually independent random variables which are sampled uniformly at random. Firstly, we truncate each clause in Φ to contain exactly k literals. Since each clause is a disjunction, this does not harm satisfiability, for if we can find a satisfying assignment for the truncated formula, it can easily be extended to a satisfying assignment for the original formula by reinserting the truncated literals. Now, define a bad event Aj for each clause in Φ, where Aj is the event that clause j in Φ is unsatisfied by the current assignment. Since each clause contains k literals (and therefore k variables) and since all variables are sampled uniformly at random, we can bound the probability of each bad event by Since each variable can appear in at most clauses and there are k variables in each clause, each bad event Aj can depend on at most other events. Therefore: multiplying both sides by ep we get: it follows by the symmetric Lovász Local Lemma that the probability of a random assignment to X1, ..., Xn satisfying all clauses in Φ is non-zero and hence such an assignment must exist. Now, the Algorithmic Lovász Local Lemma actually allows us to efficiently compute such an assignment by applying the algorithm described above. The algorithm proceeds as follows: It starts with a random truth value assignment to the variables X1, ..., Xn sampled uniformly at random. While there exists a clause in Φ that is unsatisfied, it randomly picks an unsatisfied clause C in Φ and assigns a new truth value to all variables that appear in C chosen uniformly at random. Once all clauses in Φ are satisfied, the algorithm returns the current assignment. Hence, the Algorithmic Lovász Local Lemma proves that this algorithm has an expected runtime of at most steps on CNF formulas that satisfy the two conditions above. A stronger version of the above statement is proven by Moser, see also Berman, Karpinski and Scott. The algorithm is similar to WalkSAT which is used to solve general boolean satisfiability problems. The main difference is that in WalkSAT, after the unsatisfied clause C is selected, a single variable in C is selected at random and has its value flipped (which can be viewed as selecting uniformly among only rather than all value assignments to C). Applications As mentioned before, the Algorithmic Version of the Lovász Local Lemma applies to most problems for which the general Lovász Local Lemma is used as a proof technique. Some of these problems are discussed in the following articles: Probabilistic proofs of non-probabilistic theorems Random graph Parallel version The algorithm described above lends itself well to parallelization, since resampling two independent events , i.e. , in parallel is equivalent to resampling A, B sequentially. Hence, at each iteration of the main loop one can determine the maximal set of independent and satisfied events S and resample all events in S in parallel. Under the assumption that the assignment function x satisfies the slightly stronger conditions: for some ε > 0 Moser and Tardos proved that the parallel algorithm achieves a better runtime complexity. In this case, the parallel version of the algorithm takes an expected steps before it terminates. The parallel version of the algorithm can be seen as a special case of the sequential algorithm shown above, and so this result also holds for the sequential case. References Probability theorems Combinatorics Lemmas
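As an illustration of the example above, the same resampling loop specialised to a k-CNF formula is sketched below. Clauses are encoded as lists of signed integers (a positive literal i meaning "variable i is true"), which is an assumption made for this sketch; the threshold of roughly 2^k/(e·k) occurrences per variable quoted in the comment is the order of magnitude implied by the symmetric condition discussed above. Note the contrast with WalkSAT: every variable of the chosen unsatisfied clause is re-randomised, not a single flipped variable.

```python
import random

def clause_satisfied(clause, assignment):
    # A clause (disjunction of literals) is satisfied if any literal is true.
    return any((assignment[abs(l)] if l > 0 else not assignment[abs(l)])
               for l in clause)

def lll_sat(clauses, n_vars):
    """Satisfy a k-CNF formula by Moser-Tardos resampling.
    Expected to terminate quickly when each variable appears in at most
    roughly 2^k / (e*k) clauses, per the symmetric bound discussed above."""
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    while True:
        unsat = [c for c in clauses if not clause_satisfied(c, assignment)]
        if not unsat:
            return assignment
        clause = random.choice(unsat)
        for literal in clause:          # resample *every* variable of the clause
            assignment[abs(literal)] = random.random() < 0.5

# Example: (x1 v x2 v x3) & (~x1 v x2 v ~x3)
print(lll_sat([[1, 2, 3], [-1, 2, -3]], n_vars=3))
```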
Algorithmic Lovász local lemma
[ "Mathematics" ]
2,148
[ "Discrete mathematics", "Mathematical theorems", "Combinatorics", "Theorems in probability theory", "Mathematical problems", "Lemmas" ]
22,476,314
https://en.wikipedia.org/wiki/Davies%20equation
The Davies equation is an empirical extension of Debye–Hückel theory which can be used to calculate activity coefficients of electrolyte solutions at relatively high concentrations at 25 °C. The equation, originally published in 1938, was refined by fitting to experimental data. The final form of the equation gives the mean molal activity coefficient $f_\pm$ of an electrolyte that dissociates into ions having charges $z_+$ and $z_-$ as a function of ionic strength $I$: $-\log f_\pm = A\,|z_+ z_-| \left( \frac{\sqrt{I}}{1+\sqrt{I}} - 0.30\,I \right)$, where $A$ is the Debye–Hückel constant (approximately 0.51 for water at 25 °C). The second term, $0.30\,I$, goes to zero as the ionic strength goes to zero, so the equation reduces to the Debye–Hückel equation at low concentration. However, as concentration increases, the second term becomes increasingly important, so the Davies equation can be used for solutions too concentrated to allow the use of the Debye–Hückel equation. For 1:1 electrolytes the difference between measured values and those calculated with this equation is about 2% of the value for 0.1 M solutions. The calculations become less precise for electrolytes that dissociate into ions with higher charges. Further discrepancies will arise if there is association between the ions, with the formation of ion pairs, such as . See also Osmotic coefficient Pitzer equations References Thermodynamic equations Chemical thermodynamics Equilibrium chemistry Electrochemical equations
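A small sketch of the calculation in Python, using the commonly quoted form of the equation with the Debye–Hückel constant A ≈ 0.51 at 25 °C; the worked value for 0.1 M NaCl is only a sanity check and comes out close to the measured activity coefficient of about 0.78.

```python
import math

def davies_log_gamma(z_plus, z_minus, ionic_strength, A=0.51):
    """Return log10 of the mean molal activity coefficient from the
    Davies equation (reasonable up to roughly I ~ 0.5 mol/kg at 25 degC)."""
    I = ionic_strength
    return -A * abs(z_plus * z_minus) * (math.sqrt(I) / (1 + math.sqrt(I)) - 0.30 * I)

# 0.1 M NaCl (a 1:1 electrolyte): gamma comes out near 0.78
print(10 ** davies_log_gamma(1, -1, 0.1))
```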
Davies equation
[ "Physics", "Chemistry", "Mathematics" ]
267
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamic equations", "Equations of physics", "Mathematical objects", "Equations", "Equilibrium chemistry", "Electrochemistry", "Thermodynamics", "Chemical thermodynamics", "Electrochemistry stubs", "Electrochemical equations" ]
19,855,700
https://en.wikipedia.org/wiki/Chemical%20force%20microscopy
In materials science, chemical force microscopy (CFM) is a variation of atomic force microscopy (AFM) which has become a versatile tool for characterization of materials surfaces. With AFM, structural morphology is probed using simple tapping or contact modes that utilize van der Waals interactions between tip and sample to maintain a constant probe deflection amplitude (constant force mode) or maintain height while measuring tip deflection (constant height mode). CFM, on the other hand, uses chemical interactions between functionalized probe tip and sample. Choice chemistry is typically gold-coated tip and surface with thiols attached, R being the functional groups of interest. CFM enables the ability to determine the chemical nature of surfaces, irrespective of their specific morphology, and facilitates studies of basic chemical bonding enthalpy and surface energy. Typically, CFM is limited by thermal vibrations within the cantilever holding the probe. This limits force measurement resolution to ~1 pN, which is still very suitable considering weak interactions are ~20 pN per pair. Hydrophobicity is used as the primary example throughout this consideration of CFM, but certainly any type of bonding can be probed with this method. Pioneering work CFM has been primarily developed by Charles Lieber at Harvard University in 1994. The method was demonstrated using hydrophobicity where polar molecules (e.g. COOH) tend to have the strongest binding to each other, followed by nonpolar (e.g. CH3-CH3) bonding, and a combination being the weakest. Probe tips are functionalized and substrates patterned with these molecules. All combinations of functionalization were tested, both by tip contact and removal as well as spatial mapping of substrates patterned with both moieties and observing the complementarity in image contrast. Both of these methods are discussed below. The AFM instrument used is similar to the one in Figure 1. Force of adhesion (tensile testing) This is the simpler mode of CFM operation where a functionalized tip is brought in contact with the surface and is pulled to observe the force at which separation occurs, (see Figure 2). The Johnson-Kendall-Roberts (JKR) theory of adhesion mechanics predicts this value as (1) where with being the radius of the tip, and being various surface energies between the tip, sample, and the medium each is in (liquids discussed below). is usually obtained from SEM and and from contact angle measurements on substrates with the given moieties. When the same functional groups are used, and which results in Doing this twice with two different moieties (e.g. and ) gives values of and , both of which can be used together in the same experiment to determine . Therefore, can be calculated for any combination of functionalities for comparison to CFM determined values. For similarly functionalized tip and surface, at tip detachment JKR theory also predicts a contact radius of (2) with an “effective” Young's modulus of the tip derived from the actual value and the Poisson ratio . If one knows the effective area of a single functional group, (e.g. from quantum chemistry simulations), the total number of ligands participating in tension can be estimated as As stated earlier, the force resolution of CFM does allow one to probe individual bonds of even the weakest variety, but tip curvature typically prevents this. Using Eq 2, a radius of curvature has been determined as the requirement to conduct tensile testing of individual linear moieties. 
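A hedged sketch of the JKR-type estimates described above. The pull-off force $F_{ad} = \tfrac{3}{2}\pi R\, W$ (with $W$ the work of adhesion between the functionalised tip and surface across the medium) is the common JKR result; the numerical inputs below (tip radius, work of adhesion, area per functional group, contact radius) are purely illustrative assumptions, not values from the original studies.

```python
import math

def jkr_pulloff_force(R, W):
    """JKR pull-off (adhesion) force, in newtons, for a spherical tip of
    radius R (m) on a flat surface with work of adhesion W (J/m^2)."""
    return 1.5 * math.pi * R * W

def ligands_in_contact(contact_radius, area_per_group):
    """Rough count of functional groups sharing the load at detachment."""
    return math.pi * contact_radius**2 / area_per_group

# Illustrative numbers only:
R = 50e-9        # 50 nm tip radius (e.g. from SEM)
W = 5e-3         # 5 mJ/m^2 work of adhesion
a = 0.25e-18     # 0.25 nm^2 per functional group
print(f"F_ad ~ {jkr_pulloff_force(R, W) * 1e9:.2f} nN")
print(f"~{ligands_in_contact(2e-9, a):.0f} groups within a 2 nm contact radius")
```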
A quick note to mention is the work corresponding to the hysteresis in the force profile (Figure 2) does not correlate to the bond energy. The work done in retracting the tip is approximated due to the linear behavior of deformation with being the force and being the displacement immediately before release. Using the results of Frisbie et al., normalized to the estimated 50 functional groups in contact, the work values are estimated as 39 eV, 0.25 eV, and 4.3 eV for , , and interactions, respectively. Roughly, intermolecular bond energies can be calculated by: being the boiling point. According to this, for formic acid, , and 9.73 meV for methane, , each value being about 3 orders of magnitude smaller than the experiment might suggest. Even if surface passivation with were considered (discussed below), the large error seems irrecoverable. The strongest hydrogen bonds are at most ~1 eV in energy. This strongly implies that the cantilever has a force constant smaller than or on the order of that for bond interactions and, therefore, it cannot be treated as perfectly rigid. This does open an avenue for increasing the usefulness of CFM if stiffer cantilevers can be used while still maintaining force resolution. Frictional force mapping Chemical interactions can also be used to map prepatterned substrates with varying functionalities (see Figure 3). Scanning of a surface having varying hydrophobicity with a tip having no functional groups attached would produce an image with no contrast because the surface is morphologically featureless (simple AFM operation). Functionalizing a tip to be hydrophilic would cause the cantilever to bend when the tip scans across hydrophilic portions of the substrate due to strong tip-substrate interactions. This is detected by laser deflection in a position sensitive detector, thereby producing a chemical profile image of the surface. Generally, a brighter area would correspond to a greater amplitude of deflection, so stronger bonding corresponds to lighter areas of a CFM image map. When the cantilever functionalization is switched such that the tip is bent when encountering hydrophobic areas of the substrate instead, the complementary image is observed. Frictional force response to the amount of perpendicular load applied by the tip on to the substrate is shown in Figure 4. Increasing tip-substrate interactions produce a steeper slope, as one would expect. Of experimental importance is the fact that contrast between different functionalities on the surface may be enhanced with an application of greater perpendicular force. Of course, this comes at the cost of potential damage to the substrate. Ambient: measurements in liquids Capillary force is a major problem in tensile force measurements since it effectively strengthens the tip-surface interaction. It is usually caused by adsorbed moisture on substrates from ambient environment. To eliminate this additional force, measurements in liquids can be conducted. With X-terminated tip and substrate in liquid L, the addition to Fad is calculated using Eq 1 with WXLX = 2γLL; that is, the extra force comes from the attraction of liquid molecules to each other. This is ~10 pN for EtOH, which still allows for the observation of even the weakest polar/nonpolar interactions (~20 pN). The choice of liquid is dependent on which interactions are of interest. When the solvent is immiscible with functional groups, larger than usual tip-surface bonding exists. 
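The rough conversion the quoted numbers appear to correspond to (an intermolecular bond energy of order $k_B T_b$, with $T_b$ the boiling point) can be reproduced in a few lines. The boiling points used are standard handbook values; whether this is exactly the estimate intended in the original text is an assumption, though the methane result lands close to the ~9.7 meV quoted above.

```python
k_B = 8.617e-5           # Boltzmann constant in eV/K

def bond_energy_estimate(T_boil_K):
    """Very rough intermolecular bond energy ~ k_B * T_b, in eV."""
    return k_B * T_boil_K

print(bond_energy_estimate(111.7) * 1e3, "meV  (methane, T_b = 111.7 K)")
print(bond_energy_estimate(373.9) * 1e3, "meV  (formic acid, T_b = 373.9 K)")
```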
Therefore, organic solvents are appropriate for studying van der Waals and hydrogen bonding, while electrolytes are best for probing hydrophobic and electrostatic forces. Applications in nanoscience A biological implementation of CFM at the nanoscale level is the unfolding of proteins with functionalized tip and surface (see Figure 5). Due to the increased contact area, the tip and the surface act as anchors holding protein bundles while they separate. As uncoiling ensues, the force required jumps, indicating various stages of uncoiling: (1) separation into bundles, (2) bundle separation into domains of crystalline protein held together by van der Waals forces, and (3) linearization of the protein upon overcoming the secondary bonding. Information on the internal structure of these complex proteins, as well as a better understanding of constituent interactions are provided with this method. A second consideration is one that takes advantage of unique nanoscale materials properties. The high aspect ratio of carbon nanotubes (easily >1000) is exploited to image surfaces with deep features. The use of the carbon material broadens the functionalization chemistry since there are countless routes to chemical modification of nanotube sidewalls (e.g. with diazonium, simple alkyls, hydrogen, ozone/oxygen, and amines). Multiwall nanotubes are typically used for their rigidity. Because of their approximately planar ends, one can estimate the number of functional groups that are in contact with the substrate knowing tube diameter and number of walls, which helps in determining single moiety tensile properties. Certainly, this method has obvious implications in tribology as well. References Microscopy Scanning probe microscopy
Chemical force microscopy
[ "Chemistry", "Materials_science" ]
1,771
[ "Nanotechnology", "Scanning probe microscopy", "Microscopy" ]
837,770
https://en.wikipedia.org/wiki/Entropy%20of%20fusion
In thermodynamics, the entropy of fusion is the increase in entropy when melting a solid substance. This is almost always positive since the degree of disorder increases in the transition from an organized crystalline solid to the disorganized structure of a liquid; the only known exception is helium. It is denoted as $\Delta S_\text{fus}$ and normally expressed in joules per mole-kelvin, J/(mol·K). A natural process such as a phase transition will occur when the associated change in the Gibbs free energy is negative: $\Delta G_\text{fus} = \Delta H_\text{fus} - T\,\Delta S_\text{fus} < 0$, where $\Delta H_\text{fus}$ is the enthalpy of fusion. Since this is a thermodynamic equation, the symbol $T$ refers to the absolute thermodynamic temperature, measured in kelvins (K). Equilibrium occurs when the temperature is equal to the melting point, $T = T_\text{fus}$, so that $\Delta G_\text{fus} = 0$ and the entropy of fusion is the heat of fusion divided by the melting point: $\Delta S_\text{fus} = \frac{\Delta H_\text{fus}}{T_\text{fus}}$ Helium Helium-3 has a negative entropy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative entropy of fusion below 0.8 K. This means that, at appropriate constant pressures, these substances freeze with the addition of heat. See also Entropy of vaporization Notes References Thermodynamic entropy Thermodynamic properties
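A one-line worked example of the relation $\Delta S_\text{fus} = \Delta H_\text{fus} / T_\text{fus}$: for water, with an enthalpy of fusion of about 6.01 kJ/mol and a melting point of 273.15 K, the entropy of fusion is roughly 22 J/(mol·K).

```python
def entropy_of_fusion(delta_H_fus_J_per_mol, T_melt_K):
    """Entropy of fusion in J/(mol*K) from the enthalpy of fusion and melting point."""
    return delta_H_fus_J_per_mol / T_melt_K

# Water: ~6.01 kJ/mol at 273.15 K  ->  ~22.0 J/(mol*K)
print(entropy_of_fusion(6010.0, 273.15))
```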
Entropy of fusion
[ "Physics", "Chemistry", "Mathematics" ]
251
[ "Thermodynamic properties", "Physical quantities", "Quantity", "Thermodynamic entropy", "Entropy", "Thermodynamics", "Statistical mechanics" ]
837,875
https://en.wikipedia.org/wiki/Malliavin%20calculus
In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. P. Malliavin first initiated the calculus on infinite dimensional space. Then, the significant contributors such as S. Kusuoka, D. Stroock, J-M. Bismut, Shinzo Watanabe, I. Shigekawa, and so on finally completed the foundations. Malliavin calculus is named after Paul Malliavin whose ideas led to a proof that Hörmander's condition implies the existence and smoothness of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus has been applied to stochastic partial differential equations as well. The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications in, for example, stochastic filtering. Overview and history Malliavin introduced Malliavin calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations. Gaussian probability space Consider a Wiener functional (a functional from the classical Wiener space) and consider the task of finding a derivative for it. The natural idea would be to use the Gateaux derivative however this does not always exist. Therefore it does make sense to find a new differential calculus for such spaces by limiting the directions. The toy model of Malliavin calculus is an irreducible Gaussian probability space . This is a (complete) probability space together with a closed subspace such that all are mean zero Gaussian variables and . If one chooses a basis for then one calls a numerical model. On the other hand, for any separable Hilbert space exists a canonical irreducible Gaussian probability space named the Segal model (named after Irving Segal) having as its Gaussian subspace. In this case for a one notates the associated random variable in as . Properties of a Gaussian probability space that do not depend on the particular choice of basis are called intrinsic and such that do depend on the choice extrensic. We denote the countably infinite product of real spaces as . Recall the modern version of the Cameron-Martin theorem Consider a locally convex vector space with a cylindrical Gaussian measure on it. For an element in the topological dual define the distance to the mean which is a map , and denote the closure in as Let denote the translation by . Then respectively the covariance operator on it induces a reproducing kernel Hilbert space called the Cameron-Martin space such that for any there is equivalence . In fact one can use here the Feldman–Hájek theorem to find that for any other such measure would be singular. Let be the canonical Gaussian measure, by transferring the Cameron-Martin theorem from into a numerical model , the additive group of will define a quasi-automorphism group on . 
A construction can be done as follows: choose an orthonormal basis in , let denote the translation on by , denote the map into the Cameron-Martin space by , denote and we get a canonical representation of the additive group acting on the endomorphisms by defining One can show that the action of is extrinsic meaning it does not depend on the choice of basis for , further for and for the infinitesimal generator of that where is the identity operator and denotes the multiplication operator by the random variable (acting on the endomorphisms). In the case of an arbitrary Hilbert space and the Segal model one has (and thus . Then the limit above becomes the multiplication operator by the random variable associated to . For and one now defines the directional derivative Given a Hilbert space and a Segal model with its Gaussian space . One can now deduce for the integration by parts formula . Invariance principle The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number ε and integrable function f, the following holds and hence This can be used to derive the integration by parts formula since, setting f = gh, it implies A similar idea can be applied in stochastic analysis for the differentiation along a Cameron-Martin-Girsanov direction. Indeed, let be a square-integrable predictable process and set If is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle: Differentiating with respect to ε on both sides and evaluating at ε=0, one obtains the following integration by parts formula: Here, the left-hand side is the Malliavin derivative of the random variable in the direction and the integral appearing on the right hand side should be interpreted as an Itô integral. Clark–Ocone formula One of the most useful results from Malliavin calculus is the Clark–Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows: Consider the standard Wiener measure on the canonical space , equipped with its canonical filtration. For satisfying which is Lipschitz and such that F has a strong derivative kernel, in the sense that for in C[0,1] then where H is the previsible projection of F'(x, (t,1]) which may be viewed as the derivative of the function F with respect to a suitable parallel shift of the process X over the portion (t,1] of its domain. This may be more concisely expressed by Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals F by replacing the derivative kernel used above by the "Malliavin derivative" denoted in the above statement of the result. Skorokhod integral The Skorokhod integral operator which is conventionally denoted δ is defined as the adjoint of the Malliavin derivative in the white noise case when the Hilbert space is an space, thus for u in the domain of the operator which is a subset of , for F in the domain of the Malliavin derivative, we require where the inner product is that on viz The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces. It can be shown that if u is adapted then where the integral is to be understood in the Itô sense. Thus this provides a method of extending the Itô integral to non adapted integrands. 
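For reference, the two identities most relevant to the discussion above, as they are usually stated in the literature, are the Clark–Ocone representation and the duality (integration by parts) between the Malliavin derivative $D$ and the Skorokhod integral $\delta$. The notation below is the standard one and is reproduced only as a reference for the formulas referred to in the text.

```latex
% Clark-Ocone formula: explicit martingale representation of a
% sufficiently regular functional F of the Wiener path on [0,1].
\[
F \;=\; \mathbb{E}[F] \;+\; \int_0^1 \mathbb{E}\!\left[\, D_t F \;\middle|\; \mathcal{F}_t \right] \mathrm{d}W_t .
\]

% Duality defining the Skorokhod integral \delta as the adjoint of the
% Malliavin derivative D (integration by parts):
\[
\mathbb{E}\!\left[ F\, \delta(u) \right]
  \;=\; \mathbb{E}\!\left[ \langle DF,\, u \rangle_{L^2[0,1]} \right]
  \;=\; \mathbb{E}\!\left[ \int_0^1 D_t F \, u_t \, \mathrm{d}t \right].
\]
```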
Applications The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications for example in stochastic filtering. References Kusuoka, S. and Stroock, D. (1981) "Applications of Malliavin Calculus I", Stochastic Analysis, Proceedings Taniguchi International Symposium Katata and Kyoto 1982, pp 271–306 Kusuoka, S. and Stroock, D. (1985) "Applications of Malliavin Calculus II", J. Faculty Sci. Uni. Tokyo Sect. 1A Math., 32 pp 1–76 Kusuoka, S. and Stroock, D. (1987) "Applications of Malliavin Calculus III", J. Faculty Sci. Univ. Tokyo Sect. 1A Math., 34 pp 391–442 Malliavin, Paul and Thalmaier, Anton. Stochastic Calculus of Variations in Mathematical Finance, Springer 2005, Bell, Denis. (2007) The Malliavin Calculus, Dover. ; ebook Sanz-Solé, Marta (2005) Malliavin Calculus, with applications to stochastic partial differential equations. EPFL Press, distributed by CRC Press, Taylor & Francis Group. Schiller, Alex (2009) Malliavin Calculus for Monte Carlo Simulation with Financial Applications. Thesis, Department of Mathematics, Princeton University Øksendal, Bernt K.(1997) An Introduction To Malliavin Calculus With Applications To Economics. Lecture Notes, Dept. of Mathematics, University of Oslo (Zip file containing Thesis and addendum) Di Nunno, Giulia, Øksendal, Bernt, Proske, Frank (2009) "Malliavin Calculus for Lévy Processes with Applications to Finance", Universitext, Springer. External links Lecture Notes, 43 pages Thesis, 100 pages Stochastic calculus Integral calculus Mathematical finance Calculus of variations
Malliavin calculus
[ "Mathematics" ]
1,846
[ "Calculus", "Applied mathematics", "Malliavin calculus", "Mathematical finance", "Integral calculus" ]
840,106
https://en.wikipedia.org/wiki/Master%20equation
In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states. The name was proposed in 1940: Introduction A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form: where is a column vector, and is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either a d-dimensional system (where d is 1,2,3,...), where any state is connected with exactly its 2d nearest neighbors, or a network, where every pair of states may have a connection (depending on the network's properties). When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix depends on the time, ), the process is not stationary and the master equation reads When the connections represent multi exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation: The matrix can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, and then the process is not in equilibrium. Detailed description of the matrix and properties of the system Let be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventional matrix multiplication. For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by: where is the probability for the system to be in the state , while the matrix is filled with a grid of transition-rate constants. Similarly, contributes to the occupation of all other states In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation. The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of is not defined or has been assigned an arbitrary value. The final equality arises from the fact that because the summation over the probabilities yields one, a constant function. Since this has to hold for any probability (and in particular for any probability of the form for some k) we get Using this we can write the diagonal elements as The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. 
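The matrix form referred to above, $\dot{\vec P} = \mathbf{A}\vec P$ with the column-oriented convention described in the text (off-diagonal $A_{k\ell} \ge 0$ is the rate from state $\ell$ to state $k$, and each column sums to zero), can be integrated directly. The three-state rates below are arbitrary illustrative numbers chosen for this sketch.

```python
import numpy as np
from scipy.linalg import expm

# Off-diagonal A[k, l] = transition rate from state l to state k.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])
# Diagonal chosen so every column sums to zero (probability conservation).
A -= np.diag(A.sum(axis=0))

P0 = np.array([1.0, 0.0, 0.0])            # start in state 0 with certainty
for t in (0.1, 1.0, 10.0):
    Pt = expm(A * t) @ P0                  # P(t) = exp(A t) P(0)
    print(t, Pt, Pt.sum())                 # probabilities stay normalised
```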
if, for all states k and ℓ having equilibrium probabilities and , These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations. Examples of master equations Many physical problems in classical, quantum mechanics and problems in other sciences, can be reduced to the form of a master equation, thereby performing a great simplification of the problem (see mathematical model). The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix). Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion. Stochastic chemical kinetics provide yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules). The chemical master equation can also solved for the very large models, such as the DNA damage signal from fungal pathogen Candida albicans. Quantum master equations A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical. The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation). Theorem about eigenvalues of the matrix and time evolution Because fulfills and one can show that: There is at least one eigenvector with a vanishing eigenvalue, exactly one if the graph of is strongly connected. All other eigenvalues fulfill . All eigenvectors with a non-zero eigenvalue fulfill . This has important consequences for the time evolution of a state. See also Kolmogorov equations (Markov jump process) Continuous-time Markov process Quantum master equation Fermi's golden rule Detailed balance Boltzmann's H-theorem References External links Timothy Jones, A Quantum Optics Derivation (2006) Statistical mechanics Stochastic calculus Equations Equations of physics
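Continuing the hypothetical three-state example from the sketch above: the stationary distribution is the eigenvector of $\mathbf{A}$ with eigenvalue zero (normalised to sum to one), consistent with the eigenvalue theorem stated at the end of this section, and detailed balance can then be checked term by term.

```python
import numpy as np

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])
A -= np.diag(A.sum(axis=0))

# Stationary state: right eigenvector with eigenvalue 0.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmin(np.abs(eigvals))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()
print("stationary distribution:", pi)

# Detailed balance holds if A[k,l]*pi[l] == A[l,k]*pi[k] for every pair k, l.
flows = A * pi                      # flows[k, l] = rate(l -> k) * pi[l]
print("detailed balance:", np.allclose(flows, flows.T))
```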
Master equation
[ "Physics", "Mathematics" ]
1,342
[ "Statistical mechanics", "Mathematical objects", "Equations of physics", "Equations" ]
840,577
https://en.wikipedia.org/wiki/Semimetal
A semimetal is a material with a small energy overlap between the bottom of the conduction band and the top of the valence band, but they do not overlap in momentum space. According to electronic band theory, solids can be classified as insulators, semiconductors, semimetals, or metals. In insulators and semiconductors the filled valence band is separated from an empty conduction band by a band gap. For insulators, the magnitude of the band gap is larger (e.g., > 4 eV) than that of a semiconductor (e.g., < 4 eV). Because of the slight overlap between the conduction and valence bands, semimetals have no band gap and a small density of states at the Fermi level. A metal, by contrast, has an appreciable density of states at the Fermi level because the conduction band is partially filled. Temperature dependency The insulating/semiconducting states differ from the semimetallic/metallic states in the temperature dependency of their electrical conductivity. With a metal, the conductivity decreases with increases in temperature (due to increasing interaction of electrons with phonons (lattice vibrations)). With an insulator or semiconductor (which have two types of charge carriers – holes and electrons), both the carrier mobilities and carrier concentrations will contribute to the conductivity and these have different temperature dependencies. Ultimately, it is observed that the conductivity of insulators and semiconductors increase with initial increases in temperature above absolute zero (as more electrons are shifted to the conduction band), before decreasing with intermediate temperatures and then, once again, increasing with still higher temperatures. The semimetallic state is similar to the metallic state but in semimetals both holes and electrons contribute to electrical conduction. With some semimetals, like arsenic and antimony, there is a temperature-independent carrier density below room temperature (as in metals) while, in bismuth, this is true at very low temperatures but at higher temperatures the carrier density increases with temperature giving rise to a semimetal-semiconductor transition. A semimetal also differs from an insulator or semiconductor in that a semimetal's conductivity is always non-zero, whereas a semiconductor has zero conductivity at zero temperature and insulators have zero conductivity even at ambient temperatures (due to a wider band gap). Classification To classify semiconductors and semimetals, the energies of their filled and empty bands must be plotted against the crystal momentum of conduction electrons. According to the Bloch theorem the conduction of electrons depends on the periodicity of the crystal lattice in different directions. In a semimetal, the bottom of the conduction band is typically situated in a different part of momentum space (at a different k-vector) than the top of the valence band. One could say that a semimetal is a semiconductor with a negative indirect bandgap, although they are seldom described in those terms. Classification of a material either as a semiconductor or a semimetal can become tricky when it has extremely small or slightly negative band-gaps. The well-known compound Fe2VAl for example, was historically thought of as a semi-metal (with a negative gap ~ -0.1 eV) for over two decades before it was actually shown to be a small-gap (~ 0.03 eV) semiconductor using self-consistent analysis of the transport properties, electrical resistivity and Seebeck coefficient. 
Commonly used experimental techniques to investigate band-gap can be sensitive to many things such as the size of the band-gap, electronic structure features (direct versus indirect gap) and also the number of free charge carriers (which can frequently depend on synthesis conditions). Band-gap obtained from transport property modeling is essentially independent of such factors. Theoretical techniques to calculate the electronic structure on the other hand can often underestimate band-gap. Schematic The figure is schematic, showing only the lowest-energy conduction band and the highest-energy valence band in one dimension of momentum space (or k-space). In typical solids, k-space is three-dimensional, and there are an infinite number of bands. Unlike a regular metal, semimetals have charge carriers of both types (holes and electrons), so that one could also argue that they should be called 'double-metals' rather than semimetals. However, the charge carriers typically occur in much smaller numbers than in a real metal. In this respect they resemble degenerate semiconductors more closely. This explains why the electrical properties of semimetals are partway between those of metals and semiconductors. Physical properties As semimetals have fewer charge carriers than metals, they typically have lower electrical and thermal conductivities. They also have small effective masses for both holes and electrons because the overlap in energy is usually the result of the fact that both energy bands are broad. In addition they typically show high diamagnetic susceptibilities and high lattice dielectric constants. Classic semimetals The classic semimetallic elements are arsenic, antimony, bismuth, α-tin (gray tin) and graphite, an allotrope of carbon. The first two (As, Sb) are also considered metalloids but the terms semimetal and metalloid are not synonymous. Semimetals, in contrast to metalloids, can also be chemical compounds, such as mercury telluride (HgTe), and tin, bismuth, and graphite are typically not considered metalloids. Transient semimetal states have been reported at extreme conditions. It has been recently shown that some conductive polymers can behave as semimetals. See also Charge-transfer insulators Half-metal Hubbard model Metal Mott insulator Nonmetal Solid-state physics References Materials Condensed matter physics Metals
Semimetal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,222
[ "Matter", "Metals", "Phases of matter", "Materials science", "Materials", "Condensed matter physics", "Semimetals" ]
841,262
https://en.wikipedia.org/wiki/Footprint%20%28satellite%29
The footprint of a communications satellite is the ground area over which its transponders offer coverage, and it determines the satellite dish diameter required to receive each transponder's signal. There is usually a different map for each transponder (or group of transponders), as each may be aimed to cover a different area. Footprint maps usually show either the estimated minimum satellite dish diameter required or the signal strength in each area measured in dBW. External links Links to fleet information and footprints from SES. Links to interactive maps from Intelsat for their fleet of satellites. Links to interactive maps for SES World Skies's fleet of satellites. Link to maps for Russian Satellite Communications Company satellites Link to satellite footprints from SatBeams for Geostationary satellites Satellite footprints as images and on Google Earth Satellites Satellite broadcasting
Footprint (satellite)
[ "Astronomy", "Engineering" ]
164
[ "Satellites", "Telecommunications engineering", "Outer space", "Satellite broadcasting" ]
841,429
https://en.wikipedia.org/wiki/Synthetic%20biology
Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms, and it applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature. It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biochemistry, biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology. It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes. Additionally, it is the branch of science that focuses on the new abilities of engineering into existing organisms to redesign them for useful purposes. In order to produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome. History 1910: First identifiable use of the term synthetic biology in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also noted this term in another publication, La Biologie Synthétique in 1912. 1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built. 1953: Francis Crick and James Watson publish the structure of the DNA in Nature. 1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components. 1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology. 1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene: 1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly. 2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells. 2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT in the following year. 2003: Researchers engineer an artemisinin precursor pathway in E. coli. 2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at MIT. 2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation. 2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells. 2010: Researchers publish in Science the first synthetic bacterial genome, called M. 
mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination. 2011: Functional synthetic chromosome arms are engineered in yeast. 2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing. 2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist. 2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacteria Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode 20 amino acids. 2020: Scientists created the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI. 2021: Scientists reported that xenobots are able to self-replicate by gathering loose cells in the environment and then forming new xenobots. Perspectives It is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings. Engineers view biology as technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems and our environment. Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine. Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; all these companies had an estimated net worth of $3.9 billion in the global market. Synthetic biology currently has no generally accepted definition. Here are a few examples: It is the science of emerging genetic and physical engineering to produce new (and, therefore, synthetic) life forms. To develop organisms with novel or enhanced characteristics, this emerging field of study combines biology, engineering, and related disciplines' knowledge and techniques to design chemically synthesised DNA. Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium. Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level. Optimizing these exogenous pathways in unnatural systems takes iterative fine-tuning of the individual biomolecular components to select the highest concentrations of the desired product. 
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up; to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software. Categories Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology. It is necessary to review the distinctions and analogies between the categories of synthetic biology for its social and ethical assessment, to distinguish between issues affecting the whole field and particular to a specific one. Bioengineering The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and is currently the one that likely draws the attention of most researchers and funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems. A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. The creation of whole new signalling pathways, containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells), is known as bioengineering as part of synthetic biology. By utilising simplified and abstracted metabolic and regulatory modules as well as other standardized parts that may be freely combined to create new pathways or creatures, bioengineering aims to create innovative biological systems. In addition to creating infinite opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology. Synthetic genomics The formation of animals with a chemically manufactured (minimal) genome is another facet of synthetic biology that is highlighted by synthetic genomics. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions. Scientists have previously demonstrated the potential of this approach by creating infectious viruses by synthesising the genomes of multiple viruses. These significant advances in science and technology triggered the initial public concerns concerning the risks associated with this technology. A simple genome might also work as a "chassis genome" that could be enlarged quickly by gene inclusion created for particular tasks. 
Such "chassis creatures" would be more suited for the insertion of new functions than wild organisms since they would have fewer biological pathways that could potentially conflict with the new functionalities in addition to having specific insertion sites. Synthetic genomics strives to create creatures with novel "architectures," much like the bioengineering method. It adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on necessary genes and other required DNA sequences rather than the design of metabolic or regulatory pathways based on abstract criteria. Protocell synthetic biology The in vitro generation of synthetic cells is the protocell branch of synthetic biology. Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. In the end, these synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. The protocell technique has this as its end aim, however there are other intermediary steps that fall short of meeting all the criteria for a living cell. In order to carry out a specific function, these lipid vesicles contain cell extracts or more specific sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesise a particular protein. Protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but also every component of the cell in vitro, as opposed to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by the introduced synthetic genome. Synthetic biologists in this field view their work as basic study into the conditions necessary for life to exist and its origin more than in any of the other techniques. The protocell technique, however, also lends itself well to applications; similar to other synthetic biology byproducts, protocells could be employed for the manufacture of biopolymers and medicines. Unconventional molecular biology The objective of the "unnatural molecular biology" strategy is to create new varieties of life that are based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code. The creation of new types of nucleotides that can be built into unique nucleic acids could be accomplished by changing certain DNA or RNA constituents, such as the bases or the backbone sugars. The normal genetic code is being altered by inserting quadruplet codons or changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features in protein production. It is a scientific and technological problem to adjust the enzymatic machinery of the cell for both approaches. A new sort of life would be formed by organisms with a genome built on synthetic nucleic acids or on a totally new coding system for synthetic amino acids. This new style of life would have some benefits but also some new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. 
Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped. On the other hand, if these organisms ultimately were able to survive outside of controlled space, they might have a particular benefit over natural organisms because they would be resistant to predatory living organisms or natural viruses, that could lead to an unmanaged spread of the synthetic organisms. In silico technique Synthetic biology in silico and the various strategies are interconnected. The development of complex designs, whether they are metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology methods outlined above. Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms. The practical application of simulations and models through bioengineering or other fields of synthetic biology is the long-term goal of in silico synthetic biology. Many of the computational simulations of synthetic organisms up to this point possess little to no direct analogy to living things. Due to this, in silico synthetic biology is regarded as a separate group in this article. It is sensible to integrate the five areas under the umbrella of synthetic biology as one unified area of study. Even though they focus on various facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, the varied methodologies begin with numerous methodological approaches, which leads to the diversity of synthetic biology approaches. Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all have the same underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments. Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances, synthetic biology as a whole must be taken into consideration. Four engineering approaches Synthetic biology has traditionally been divided into four different engineering approaches: top down, parallel, orthogonal and bottom up. To replicate emergent behaviours from natural biology and build artificial life, unnatural chemicals are used. The other looks for interchangeable components from biological systems to put together and create systems that do not work naturally. In either case, a synthetic objective compels researchers to venture into new area in order to engage and resolve issues that cannot be readily resolved by analysis. Due to this, new paradigms are driven to arise in ways that analysis cannot easily do. In addition to equipments that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that enhance the treatment of patients with infectious diseases. 
Top-down approach It involves using metabolic and genetic engineering techniques to impart new functions to living cells. By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single genesis for cellular life, the so-called Last Universal Common Ancestor, which supports the presence of a universal minimal genome that gave rise to all living things. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell. As a result, even while the Holy Grail-like pursuit of the "minimum genome" has grown elusive, cutting out a number of non-essential functions impairs an organism's fitness and leads to "fragile" genomes. Bottom-up approach This approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell. Reproduction, replication, and assembly are three crucial self-organizational principles that are taken into account in order to accomplish this. Cells, which are made up of a container and a metabolism, are considered "hardware" in the definition of reproduction, whereas replication occurs when a system duplicates a perfect copy of itself, as in the case of DNA, which is considered "software." When vesicles or containers (such as Oparin's coacervates) formed of tiny droplets of molecules that are organic like lipids or liposomes, membrane-like structures comprising phospholipids, aggregate, assembly occur. The study of protocells exists along with other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins" as well as to mimic physiological functions including cell division and growth. Recently a cell-free system capable of self-sustaining using CO2 was engineered by bottom-up integrating metabolism with gene expression. This research, which is primarily essential, deserves proper recognition as synthetic biology research. Parallel approach Parallel engineering is also known as bioengineering. The basic genetic code is the foundation for parallel engineering research, which uses conventional biomolecules like nucleic acids and the 20 amino acids to construct biological systems. For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA components, engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. For directing the expression of two or more genes and/or proteins, the majority of these applications often rely on the use of one or more vectors (or plasmids). Small, circular, double-strand DNA units known as plasmids, which are primarily found in prokaryotic but can also occasionally be detected in eukaryotic cells, may replicate autonomously of chromosomal DNA. Orthogonal approach It is also known as perpendicular engineering. This strategy, also referred to as "chemical synthetic biology," principally seeks to alter or enlarge the genetic codes of living systems utilising artificial DNA bases and/or amino acids. This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. 
In recent decades, researchers have created compounds that are structurally similar to the canonical DNA bases to see if those "alien" or xeno (XNA) molecules may be employed as genetic information carriers. Similarly, noncanonical moieties have been used to take the place of the DNA sugar (deoxyribose). In order to express information beyond the 20 conventional amino acids of proteins, the genetic code can be altered or enlarged. One method involves incorporating a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places using orthogonal enzymes and a transfer RNA adaptor from another organism. Orthogonal enzymes are produced by "directed evolution", which entails repeated cycles of gene mutagenesis (production of genotypic diversity), screening or selection (of a specific phenotypic trait), and amplification of a better variant for the following iterative round. Numerous XAAs have been effectively incorporated into proteins in more complex creatures such as worms and flies, as well as in bacteria, yeast, and human cell lines. As a result of canonical DNA sequence changes, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins or to create "mirror life", that is, biological systems that contain biomolecules made up of enantiomers with different chiral orientations. Enabling technologies Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Our ability to comprehend and design biological systems has changed significantly as a result of developments over the previous few decades in both reading (sequencing) and writing (synthesis) of DNA sequences. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling an ever-increasing degree of control over biological systems and even entire organisms. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD). DNA and gene synthesis Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructs built from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome in a project spanning two years. In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and were working on getting it functioning in a living cell. In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. 
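As a loose illustration of the oligo-based assembly arithmetic described above, the following sketch tiles a target sequence into overlapping 60-mers so that adjacent oligos share a 20-base annealing region, roughly as in assembly PCR. The oligo length, overlap, and example sequence are arbitrary assumptions for illustration; real designs must also balance melting temperatures, avoid repeats, and correct synthesis errors.

def tile_into_oligos(sequence, oligo_len=60, overlap=20):
    """Toy model of splitting a target sequence into overlapping oligos.

    Neighbouring oligos share `overlap` bases so they can anneal to one
    another during assembly; the final oligo may be shorter than oligo_len.
    """
    step = oligo_len - overlap
    oligos = []
    pos = 0
    while pos < len(sequence):
        oligos.append(sequence[pos:pos + oligo_len])
        pos += step
    return oligos

if __name__ == "__main__":
    target = "ATGC" * 75  # a made-up 300 bp target standing in for a gene fragment
    parts = tile_into_oligos(target)
    print(len(parts), "oligos")
    print("adjacent oligos share their overlap:", parts[0][-20:] == parts[1][:20])

Scaled to a genome-sized target, the same arithmetic shows why chip-synthesized oligo pools and error correction, described next, became necessary.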
Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino-acids (see George M. Church's and Anthony Forster's synthetic cell projects.). This favors a synthesis-from-scratch approach. Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking. Sequencing DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms. Modularity This is the ability of a system or component to operate without reference to its context. The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using restriction enzymes EcoRI or XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix). Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation. To increase genome modularity, the practice of genome refactoring or improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function" has been adopted across synthetic biology disciplines. Some notable examples of refactoring including the nitrogen fixation cluster and type III secretion system along with bacteriophages T7 and ΦX174. While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. In addition, it is necessary to regulate protein-protein interactions in cells, such as with light (using light-oxygen-voltage-sensing domains) or cell-permeable small molecules by chemically induced dimerization. In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the modeling module. 
In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation. Modeling Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks. Because of the large number of species involved and the intricacy of their interactions, only extensive modelling makes it possible to explore dynamic gene expression in a form suitable for research and design. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs; this contrasts with modelling artificial networks only a posteriori, after they have been built. Microfluidics Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyze and characterize them. It is widely employed in screening assays. Synthetic transcription factors Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) with respect to transcription output and cooperativity among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, mirroring the complex-based transcription machinery of eukaryotes. Applications Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology include: Utilizing microorganisms for bioremediation to remove contaminants from our water, soil, and air. Production of complex natural products that are usually extracted from plants but cannot be obtained in sufficient amounts, e.g. drugs of natural origin, such as artemisinin and paclitaxel. Beta-carotene, a substance typically associated with carrots that prevents vitamin A deficiency, is produced by rice that has been modified. Every year, between 250,000 and 500,000 children lose their vision due to vitamin A deficiency, which also significantly raises their chance of dying from infectious diseases. As a sustainable and environmentally benign alternative to the fresh roses that perfumers use to create expensive fragrances, yeast has been engineered to produce rose oil. Biosensors A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. 
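The response of such a sensing organism can be sketched with the kind of dynamic model described in the Modeling subsection above. The following is a minimal, purely illustrative simulation of a hypothetical inducible reporter gene: the analyte activates a promoter through a Hill function and the reporter protein accumulates and degrades over time. All parameter names and values are assumptions, not measurements from any real circuit.

def reporter_timecourse(analyte, hours=10.0, dt=0.01,
                        k_max=100.0, K=5.0, n=2.0, k_deg=1.0):
    """Euler integration of dR/dt = k_max * A^n / (K^n + A^n) - k_deg * R."""
    activation = analyte ** n / (K ** n + analyte ** n)  # Hill-type promoter response
    R = 0.0
    for _ in range(int(hours / dt)):
        R += (k_max * activation - k_deg * R) * dt
    return R

if __name__ == "__main__":
    for a in (0.0, 1.0, 5.0, 20.0):
        print("analyte", a, "-> reporter level", round(reporter_timecourse(a), 2))

In an actual biosensor the output would be a measurable signal such as luminescence or fluorescence, and the circuit would include the transducer and threshold elements discussed below.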
One such system is the Lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence, and can be placed after a respondent promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor created, consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants. When the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E.coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP). Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used. Biosensors could also be used to detect pathogenic signatures—such as of SARS-CoV-2—and can be wearable. For the purpose of detecting and reacting to various and temporary environmental factors, cells have developed a wide range of regulatory circuits, ranging from transcriptional to post-translational. These circuits are made up of transducer modules that filter the signals and activate a biological response, as well as carefully designed sensitive sections that attach analytes and regulate signal-detection thresholds. Modularity and selectivity are programmed to biosensor circuits at the transcriptional, translational, and post-translational levels, to achieve the delicate balancing of the two basic sensing modules. Food and drink However, not all synthetic nutrition products are animal food products – for instance, as of 2021, there are also products of synthetic coffee that are reported to be close to commercialization. Similar fields of research and production based on synthetic biology that can be used for the production of food and drink are: Genetically engineered microbial food cultures (e.g. for solar-energy-based protein powder) Cell-free artificial synthesis (e.g. synthetic starch; ) Materials Photosynthetic microbial cells have been used as a step to synthetic production of spider silk. Biological computers A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms, and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation. In 2007, in human cells, research demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems opening the way for machine learning in these systems. Cell transformation Cells use interacting genes and proteins, which are called gene circuits, to implement diverse function, such as responding to environmental signals, decision making and communication. 
Three key components are involved: DNA, RNA, and synthetic-biologist-designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels. Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin. Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several ways allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs. By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of the extracellular material of biofilms, as a platform for programmable nanomaterials. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization. Designed proteins Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with properties similar to hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for the production of longer-chain alcohols from sugar. Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids, including nonstandard amino acids such as O-methyl tyrosine, or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required. Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. 
For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only nine amino acids were used. Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvements of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production". Designed nucleic acid systems Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enables the design of new genetic systems. Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides. Space exploration Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of occupied outposts with less dependence on Earth. Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops. Synthetic life One important topic in synthetic biology is synthetic life, that is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to either probe the origins of life, study some of the properties of life, or more ambitiously to recreate life from non-living (abiotic) components. Synthetic life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. 
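Returning to the DNA data-storage work described under Designed nucleic acid systems above, the basic encoding step can be sketched as a mapping between bits and bases. The two-bits-per-base scheme below is only a schematic assumption; published systems, including the book-encoding experiment mentioned above, use more elaborate encodings with addressing, error correction, and constraints such as avoiding long homopolymer runs.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def text_to_dna(text):
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(strand):
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

if __name__ == "__main__":
    message = "synthetic biology"
    strand = text_to_dna(message)
    print(strand)
    assert dna_to_text(strand) == message  # round-trip check

In practice the message must also be split across many short oligos carrying index sequences, since individual synthesized strands are far shorter than a book-length file.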
A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Nobody has been able to create such a cell. A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter, and his team introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome. The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM. In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons instead, in order to encode the 20 amino acids. In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative. In 2023, researchers were able to create the first synthetic human embryo models derived from stem cells. Drug delivery platforms In therapeutics, synthetic biology has achieved significant advances in altering and simplifying the scope of therapeutics in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and transport of small molecules, are made possible by the rational, model-guided design and construction of biological components. Synthetic biology devices have been designed to act as therapeutics in their own right. Entire engineered viruses and organisms can be controlled so that they target particular pathogens and diseased pathways. Thus, in two independent studies, researchers utilised genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving them genetic features that specifically target and hinder bacterial defences against antibiotic activity. In cancer therapy, since conventional medicines frequently target tumours and normal tissues indiscriminately, artificially created viruses and organisms that can identify and couple their therapeutic action to pathological signals may be helpful. For example, adenoviruses have been engineered so that their replication in human cells is controlled by the activity of the p53 pathway. Engineered bacteria-based platform Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. 
The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. Then the bacteria only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves. Multiple species and strains are applied in these therapeutics. Most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species have their own property and are unique to cancer therapy in terms of tissue colonization, interaction with immune system and ease of application. Engineered yeast-based platform Synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When orally delivered, these live yeast act like micro-factories and will make therapeutic molecules directly in the gastrointestinal tract. Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease. A live S. boulardii yeast delivering a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile has been developed. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models. The first in human clinical trial of engineered live yeast for the treatment of Clostridioides difficile infection is anticipated in 2024 and will be sponsored by the developer Fzata, Inc. Cell-based platform The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells. T cell receptors were engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Multiple second generation CAR-based therapies have been approved by FDA. Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Mechanisms can more finely control the system and stop and reactivate it. Since the number of T-cells are important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics. Although several mechanisms can improve safety and control, limitations include the difficulty of inducing large DNA circuits into the cells and risks associated with introducing foreign components, especially proteins, into cells. Biofuels, pharmaceuticals and biomaterials The most popular biofuel is ethanol produced from corn or sugar cane, but this method of producing biofuels is troublesome and constrained due to the high agricultural cost and inadequate fuel characteristics of ethanol. 
A substitute and potential source of renewable energy is microbes that have had their metabolic pathways altered so that they convert biomass into biofuels more efficiently. These techniques can be expected to succeed only if their production costs can be made to match, or even beat, those of present fuel production. Related to this, several medicines are prevented from reaching a larger therapeutic range by their expensive manufacturing procedures. The creation of new materials and the microbiological manufacturing of biomaterials would both benefit substantially from novel synthetic biology tools. CRISPR/Cas9 The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method of genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) directs the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed recombination and non-homologous end joining, can then be used to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead Cas9 or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been utilised to control gene expression in bacteria and, when fused to activation or repression domains, in yeast. Regulatory elements To build and develop biological systems, regulatory components such as promoters, ribosome-binding sites (RBSs), and terminators are crucial. Despite years of study, the variety and number of well-characterised promoters and terminators is large for Escherichia coli, but even for the well-researched model organism Saccharomyces cerevisiae, and especially for other organisms of interest, these tools are quite scarce. Numerous techniques have been invented for the discovery and identification of promoters and terminators in order to overcome this constraint, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design. Organoids Synthetic biology has been used for organoids, which are lab-grown organs with application to medical research and transplantation. Bioprinted organs Other transplants and induced regeneration There is ongoing research and development into synthetic biology based methods for inducing regeneration in humans, as well as the creation of transplantable artificial organs. Nanoparticles, artificial cells and micro-droplets Synthetic biology can be used for creating nanoparticles which can be used for drug delivery as well as for other purposes. Complementary research and development seeks to create, and has created, synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, the portions of atherosclerotic plaque that cause heart attacks. Synthetic micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors, could for example be used to produce hydrogen as part of a hydrogen economy biotechnology. Electrogenetics Mammalian designer cells are engineered by humans to behave in a specific way, such as an immune cell that expresses a synthetic receptor designed to combat a specific disease. Electrogenetics is an application of synthetic biology that involves utilizing electrical fields to stimulate a response in engineered cells. 
Controlling the designer cells can be done with relative ease through the use of common electronic devices, such as smartphones. Additionally, electrogenetics allows for the possibility of creating devices that are much smaller and compact than devices that use other stimulus through the use of microscopic electrodes. One example of how electrogenetics is used to benefit public health is through stimulating designer cells that are able to produce/deliver therapeutics. This was implemented in ElectroHEK cells, cells that contain voltage-gated calcium channels that are electrosensitive, meaning that the ion channel can be controlled by electrical conduction between electrodes and the ElectroHEK cells. The expression levels of the artificial gene that these ElectroHEK cells contained was shown to be able to be controlled by changing the voltage or electrical pulse length. Further studies have expanded on this robust system, one of which is a beta cell line system designed to control the release of insulin based on electric signals. Ethics The creation of new life and the tampering of existing life has raised ethical concerns in the field of synthetic biology and are actively being discussed. Common ethical questions include: Is it morally right to tamper with nature? Is one playing God when creating new life? What happens if a synthetic organism accidentally escapes? What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)? Who will have control of and access to the products of synthetic biology? Who will gain from these innovations? Investors? Medical patients? Industrial farmers? Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans? What if a new creation is deserving of moral or legal status? The ethical aspects of synthetic biology has three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity. Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies and extensive regulations of genetic engineering and pathogen research were in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards.". The "creation" of life One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. 
Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to sense pain, sentience, and self-perception. There is an ongoing debate as to whether such life forms should be granted moral or legal rights, though no consensus exists as to how these rights would be administered or enforced. Ethical support for synthetic biology Ethics and moral rationales that support certain applications of synthetic biology include their potential mitigation of substantial global problems of detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health, as well as potential reduction of human labor needs and, via therapies of diseases, reduction of human suffering and prolonged life. Biosafety and biocontainment What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life in the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics as there is yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild. In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Biosecurity and bioterrorism Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. 
Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions. Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow. Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks." European Union The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms. A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity. COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009. The International Association Synthetic Biology has proposed self-regulation. This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry". United States In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology. On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology". 
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the "creation of life". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Opposition On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome. Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations". Health and safety The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. 
Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment. Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse. Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology. See also References NHGRI. (2019, March 13). Synthetic Biology. Genome.gov. https://www.genome.gov/about-genomics/policy-issues/Synthetic-Biology Bibliography External links Engineered Pathogens and Unnatural Biological Weapons: The Future Threat of Synthetic Biology . Threats and considerations Synthetic biology books popular science book and textbooks Introductory Summary of Synthetic Biology . Concise overview of synthetic biology concepts, developments and applications Collaborative overview article on Synthetic Biology Controversial DNA startup wants to let customers create creatures (2015-01-03), San Francisco Chronicle It's Alive, But Is It Life: Synthetic Biology and the Future of Creation (28 September 2016), World Science Festival Biotechnology Molecular genetics Systems biology Bioinformatics Biocybernetics Appropriate technology Applications of evolutionary algorithms Bioterrorism
Synthetic biology
[ "Chemistry", "Engineering", "Biology" ]
12,835
[ "Synthetic biology", "Biological engineering", "Biotechnology", "Biological warfare", "Bioinformatics", "Bioterrorism", "Molecular genetics", "nan", "Molecular biology", "Systems biology" ]
841,685
https://en.wikipedia.org/wiki/Tractrix
In geometry, a tractrix (plural: tractrices) is the curve along which an object moves, under the influence of friction, when pulled on a horizontal plane by a line segment attached to a pulling point (the tractor) that moves at a right angle to the initial line between the object and the puller at an infinitesimal speed. It is therefore a curve of pursuit. It was first introduced by Claude Perrault in 1670, and later studied by Isaac Newton (1676) and Christiaan Huygens (1693). Mathematical derivation Suppose the object is placed at (a, 0) and the puller at the origin, so that a is the length of the pulling thread. Suppose the puller starts to move along the y axis in the positive direction. At every moment, the thread will be tangent to the curve described by the object, so that the curve becomes completely determined by the movement of the puller. Mathematically, if the coordinates of the object are (x, y), then by the Pythagorean theorem the y-coordinate of the puller is y + √(a² − x²). Writing that the slope of the thread equals that of the tangent to the curve leads to the differential equation dy/dx = −√(a² − x²)/x with the initial condition y(a) = 0. Its solution is y = a ln((a + √(a² − x²))/x) − √(a² − x²). If instead the puller moves downward from the origin, then the minus sign should be removed from the differential equation and therefore inserted into the solution. Each of the two solutions defines a branch of the tractrix; they meet at the cusp point (a, 0). The first term of this solution can also be written a arsech(x/a), where arsech is the inverse hyperbolic secant function. A numerical cross-check of this solution is sketched at the end of this article. Basis of the tractrix The essential property of the tractrix is the constancy of the distance between a point P on the curve and the intersection of the tangent line at P with the asymptote of the curve. The tractrix might be regarded in a multitude of ways: It is the locus of the center of a hyperbolic spiral rolling (without skidding) on a straight line. It is the involute of the catenary function, which describes a fully flexible, inelastic, homogeneous string attached to two points that is subjected to a gravitational field. The catenary has the equation y = a cosh(x/a). The trajectory determined by the middle of the back axle of a car pulled by a rope at a constant speed and with a constant direction (initially perpendicular to the vehicle). It is a (non-linear) curve which a circle of radius a rolling on a straight line, with its center on the x axis, intersects perpendicularly at all times. The function admits a horizontal asymptote. The curve is symmetrical with respect to the y-axis. The curvature radius is r = (a/y)√(a² − y²). An important consequence of the tractrix was the study of its surface of revolution about its asymptote: the pseudosphere. Studied by Eugenio Beltrami in 1868, as a surface of constant negative Gaussian curvature, the pseudosphere is a local model of hyperbolic geometry. The idea was carried further by Kasner and Newman in their book Mathematics and the Imagination, where they show a toy train dragging a pocket watch to generate the tractrix. Properties The curve can be parameterised by the equations x = a(t − tanh t), y = a sech t, with the two branches corresponding to t ≥ 0 and t ≤ 0. Due to the geometrical way it was defined, the tractrix has the property that the segment of its tangent, between the asymptote and the point of tangency, has constant length a. The arc length of one branch between y = y₁ and y = y₂ is a ln(y₁/y₂). The area between the tractrix and its asymptote is πa²/2 (πa²/4 for each branch), which can be found using integration or Mamikon's theorem. The envelope of the normals of the tractrix (that is, the evolute of the tractrix) is the catenary (or chain curve) given by y = a cosh(x/a). 
The surface of revolution created by revolving a tractrix about its asymptote is a pseudosphere. The tractrix is a transcendental curve; it cannot be defined by a polynomial equation. Practical application In 1927, P. G. A. H. Voigt patented a horn loudspeaker design based on the assumption that a wave front traveling through the horn is spherical of a constant radius. The idea is to minimize distortion caused by internal reflection of sound within the horn. The resulting shape is the surface of revolution of a tractrix. Voigt's design removed the annoying "honk" characteristic from previous horn designs, especially conical horns, and thus revitalized interest in the horn loudspeaker. Klipsch Audio Technologies has used the tractrix design for the great majority of their loudspeakers, and many loudspeaker designers returned to the tractrix in the 21st century, creating an audiophile market segment. The tractrix horn differs from the more common exponential horn in that it provides for a wider spread of high frequency energy, and it supports the lower frequencies more strongly. An important application is in the forming technology for sheet metal. In particular a tractrix profile is used for the corner of the die on which the sheet metal is bent during deep drawing. A toothed belt-pulley design provides improved efficiency for mechanical power transmission using a tractrix catenary shape for its teeth. This shape minimizes the friction of the belt teeth engaging the pulley, because the moving teeth engage and disengage with minimal sliding contact. Original timing belt designs used simpler trapezoidal or circular tooth shapes, which cause significant sliding and friction. Drawing machines In October–November 1692, Christiaan Huygens described three tractrix-drawing machines. In 1693 Gottfried Wilhelm Leibniz devised a "universal tractional machine" which, in theory, could integrate any first order differential equation. The concept was an analog computing mechanism implementing the tractional principle. The device was impractical to build with the technology of Leibniz's time, and was never realized. In 1706 John Perks built a tractional machine in order to realise the hyperbolic quadrature. In 1729 Giovanni Poleni built a tractional device that enabled logarithmic functions to be drawn. A history of all these machines can be seen in an article by H. J. M. Bos. See also Dini's surface Hyperbolic functions Natural logarithm Sign function Trigonometric functions Notes References External links Tractrix on MathWorld Module: Leibniz's Pocket Watch ODE at PHASER Plane curves Mathematical physics
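As an informal cross-check of the closed-form branch given in the Mathematical derivation section above (the orientation with the asymptote along the y axis), the sketch below evaluates y(x) = a·arsech(x/a) − √(a² − x²), verifies numerically that its slope matches dy/dx = −√(a² − x²)/x, and confirms that the tangent segment down to the asymptote has length a. The choice a = 1 and the sample points are arbitrary.

import math

A = 1.0  # thread length, chosen arbitrarily for this check

def y(x, a=A):
    # closed-form branch: y = a*arsech(x/a) - sqrt(a^2 - x^2), with arsech(u) = acosh(1/u)
    return a * math.acosh(a / x) - math.sqrt(a * a - x * x)

def slope(x, a=A):
    # right-hand side of the defining differential equation
    return -math.sqrt(a * a - x * x) / x

for x in (0.2, 0.5, 0.9):
    h = 1e-6
    numeric = (y(x + h) - y(x - h)) / (2 * h)      # central-difference slope
    tangent_length = math.hypot(x, x * slope(x))   # segment from (x, y) to the y axis
    print(x, round(numeric, 6), round(slope(x), 6), round(tangent_length, 6))

Each printed line shows the numerical and exact slopes agreeing and the tangent length equal to a, the defining property of the curve.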
Tractrix
[ "Physics", "Mathematics" ]
1,313
[ "Plane curves", "Applied mathematics", "Theoretical physics", "Euclidean plane geometry", "Planes (geometry)", "Mathematical physics" ]
841,689
https://en.wikipedia.org/wiki/Pullback%20%28category%20theory%29
In category theory, a branch of mathematics, a pullback (also called a fiber product, fibre product, fibered product or Cartesian square) is the limit of a diagram consisting of two morphisms and with a common codomain. The pullback is written . Usually the morphisms and are omitted from the notation, and then the pullback is written . The pullback comes equipped with two natural morphisms and . The pullback of two morphisms and need not exist, but if it does, it is essentially uniquely defined by the two morphisms. In many situations, may intuitively be thought of as consisting of pairs of elements with in , in , and . For the general definition, a universal property is used, which essentially expresses the fact that the pullback is the "most general" way to complete the two given morphisms to a commutative square. The dual concept of the pullback is the pushout. Universal property Explicitly, a pullback of the morphisms and consists of an object and two morphisms and for which the diagram commutes. Moreover, the pullback must be universal with respect to this diagram. That is, for any other such triple where and are morphisms with , there must exist a unique such that This situation is illustrated in the following commutative diagram. As with all universal constructions, a pullback, if it exists, is unique up to isomorphism. In fact, given two pullbacks and of the same cospan , there is a unique isomorphism between and respecting the pullback structure. Pullback and product The pullback is similar to the product, but not the same. One may obtain the product by "forgetting" that the morphisms and exist, and forgetting that the object exists. One is then left with a discrete category containing only the two objects and , and no arrows between them. This discrete category may be used as the index set to construct the ordinary binary product. Thus, the pullback can be thought of as the ordinary (Cartesian) product, but with additional structure. Instead of "forgetting" , , and , one can also "trivialize" them by specializing to be the terminal object (assuming it exists). and are then uniquely determined and thus carry no information, and the pullback of this cospan can be seen to be the product of and . Examples Commutative rings In the category of commutative rings (with identity), the pullback is called the fibered product. Let , , and be commutative rings (with identity) and and (identity preserving) ring homomorphisms. Then the pullback of this diagram exists and is given by the subring of the product ring defined by along with the morphisms given by and for all . We then have Groups and modules In complete analogy to the example of commutative rings above, one can show that all pullbacks exist in the category of groups and in the category of modules over some fixed ring. Sets In the category of sets, the pullback of functions and always exists and is given by the set together with the restrictions of the projection maps and to . Alternatively one may view the pullback in asymmetrically: where is the disjoint union of sets (the involved sets are not disjoint on their own unless resp. is injective). In the first case, the projection extracts the index while forgets the index, leaving elements of . This example motivates another way of characterizing the pullback: as the equalizer of the morphisms where is the binary product of and and and are the natural projections. This shows that pullbacks exist in any category with binary products and equalizers. 
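In the category of sets the construction just described is completely concrete, and a few lines of code make it explicit. The sketch below is illustrative (the sets and functions are made up): it builds the pullback of f : A → C and g : B → C as the set of pairs on which the two functions agree, together with the two projection morphisms.

```python
def pullback(A, B, f, g):
    """Pullback of f: A -> C and g: B -> C in the category of finite sets.

    Returns P = {(a, b) | f(a) == g(b)} together with the two projections.
    """
    P = {(a, b) for a in A for b in B if f(a) == g(b)}
    p1 = lambda pair: pair[0]   # projection P -> A
    p2 = lambda pair: pair[1]   # projection P -> B
    return P, p1, p2

# Illustrative example: fibre product of two maps into C = {0, 1}
A = {1, 2, 3}
B = {"x", "y"}
f = lambda a: a % 2                 # A -> {0, 1}
g = lambda b: 0 if b == "x" else 1  # B -> {0, 1}
P, p1, p2 = pullback(A, B, f, g)
# P == {(2, 'x'), (1, 'y'), (3, 'y')}
```

Every element q of P satisfies f(p1(q)) = g(p2(q)), which is exactly the commuting-square condition in the universal property above.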
In fact, by the existence theorem for limits, all finite limits exist in a category with binary products and equalizers; equivalently, all finite limits exist in a category with terminal object and pullbacks (by the fact that binary product = pullback on the terminal object, and that an equalizer is a pullback involving binary product). Graphs of functions A specific example of a pullback is given by the graph of a function. Suppose that is a function. The graph of is the set The graph can be reformulated as the pullback of and the identity function on . By definition, this pullback is and this equals . Fiber bundles Another example of a pullback comes from the theory of fiber bundles: given a bundle map and a continuous map , the pullback (formed in the category of topological spaces with continuous maps) is a fiber bundle over called the pullback bundle. The associated commutative diagram is a morphism of fiber bundles. This is also the case in the category of differentiable manifolds. A special case is the pullback of two fiber bundles . In this case is a fiber bundle over , and pulling back along the diagonal map gives a space homeomorphic (diffeomorphic) to , which is a fiber bundle over . The pullback of two smooth transverse maps into the same differentiable manifold is also a differentiable manifold, and the tangent space of the pullback is the pullback of the tangent spaces along the differential maps. Preimages and intersections Preimages of sets under functions can be described as pullbacks as follows: Suppose , . Let be the inclusion map . Then a pullback of and (in ) is given by the preimage together with the inclusion of the preimage in and the restriction of to . Because of this example, in a general category the pullback of a morphism and a monomorphism can be thought of as the "preimage" under of the subobject specified by . Similarly, pullbacks of two monomorphisms can be thought of as the "intersection" of the two subobjects. Least common multiple Consider the multiplicative monoid of positive integers as a category with one object. In this category, the pullback of two positive integers and is just the pair , where the numerators are both the least common multiple of and . The same pair is also the pushout. Properties In any category with a terminal object , the pullback is just the ordinary product . Monomorphisms are stable under pullback: if the arrow in the diagram is monic, then so is the arrow . Similarly, if is monic, then so is . Isomorphisms are also stable, and hence, for example, for any map (where the implied map is the identity). In an abelian category all pullbacks exist, and they preserve kernels, in the following sense: if is a pullback diagram, then the induced morphism is an isomorphism, and so is the induced morphism . Every pullback diagram thus gives rise to a commutative diagram of the following form, where all rows and columns are exact: Furthermore, in an abelian category, if is an epimorphism, then so is its pullback , and symmetrically: if is an epimorphism, then so is its pullback . In these situations, the pullback square is also a pushout square. There is a natural isomorphism (A×CB)×B D ≅ A×CD. Explicitly, this means: if maps f : A → C, g : B → C and h : D → B are given and the pullback of f and g is given by r : P → A and s : P → B, and the pullback of s and h is given by t : Q → P and u : Q → D , then the pullback of f and gh is given by rt : Q → A and u : Q → D. 
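This pasting property can be spot-checked on small finite sets. The following is only a numerical illustration with made-up sets and maps, not a proof of the universal property: it compares the composite of two successive pullbacks with the pullback of f and g∘h formed directly.

```python
def pullback(A, B, f, g):
    """Pullback in finite sets: the pairs on which f and g agree, plus projections."""
    P = {(a, b) for a in A for b in B if f(a) == g(b)}
    return P, (lambda p: p[0]), (lambda p: p[1])

A, B, C, D = {0, 1, 2}, {0, 1}, {0, 1}, {"u", "v", "w"}
f = lambda a: a % 2                   # A -> C
g = lambda b: b                       # B -> C
h = lambda d: 0 if d == "u" else 1    # D -> B

P1, r, s = pullback(A, B, f, g)                     # pullback of f and g
P2, t, u = pullback(P1, D, s, h)                    # pullback of s and h
composite = {(r(t(q)), u(q)) for q in P2}           # image of the outer cone
direct, _, _ = pullback(A, D, f, lambda d: g(h(d))) # pullback of f and g.h
assert composite == direct                          # spot check of the pasting law
```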
Graphically this means that two pullback squares, placed side by side and sharing one morphism, form a larger pullback square when ignoring the inner shared morphism. Any category with pullbacks and products has equalizers. Weak pullbacks A weak pullback of a cospan is a cone over the cospan that is only weakly universal, that is, the mediating morphism above is not required to be unique. See also Pullbacks in differential geometry Equijoin in relational algebra Fiber product of schemes Notes References Adámek, Jiří, Herrlich, Horst, & Strecker, George E.; (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. . (now free on-line edition). Cohn, Paul M.; Universal Algebra (1981), D. Reidel Publishing, Holland, (Originally published in 1965, by Harper & Row). External links Interactive web page which generates examples of pullbacks in the category of finite sets. Written by Jocelyn Paine. Limits (category theory)
Pullback (category theory)
[ "Mathematics" ]
1,812
[ "Mathematical structures", "Category theory", "Limits (category theory)" ]
282,299
https://en.wikipedia.org/wiki/Muzzle%20velocity
Muzzle velocity is the speed of a projectile (bullet, pellet, slug, ball/shots or shell) with respect to the muzzle at the moment it leaves the end of a gun's barrel (i.e. the muzzle). Firearm muzzle velocities range from approximately to in black powder muskets, to more than in modern rifles with high-velocity cartridges such as the .220 Swift and .204 Ruger, all the way to for tank guns firing kinetic energy penetrator ammunition. To simulate orbital debris impacts on spacecraft, NASA launches projectiles through light-gas guns at speeds up to . FPS (feet per second) and MPH (miles per hour) are the most common American measurements for bullets. Several factors, including the type of firearm, the cartridge, and the barrel length, determine the bullet's muzzle velocity. Projectile velocity For projectiles in unpowered flight, its velocity is highest at leaving the muzzle and drops off steadily because of air resistance. Projectiles traveling less than the speed of sound (about in dry air at sea level) are subsonic, while those traveling faster are supersonic and thus can travel a substantial distance and even hit a target before a nearby observer hears the "bang" of the shot. Projectile speed through air depends on a number of factors such as barometric pressure, humidity, air temperature and wind speed. Some high-velocity small arms have muzzle velocities higher than the escape speeds of some Solar System bodies such as Pluto and Ceres, meaning that a bullet fired from such a gun on the surface of the body would leave its gravitational field; however, no arms are known with muzzle velocities that can overcome Earth's gravity (and atmosphere) or those of the other planets or the Moon. While traditional cartridges cannot generally achieve a Lunar escape speed (approximately ) or higher due to modern limitations of action and propellant, a projectile was accelerated to velocities exceeding at Sandia National Laboratories in 1994. The gun operated in two stages. First, burning gunpowder was used to drive a piston to pressurize hydrogen to . The pressurized gas was then released to a secondary piston, which traveled forward into a shock-absorbing "pillow", transferring the energy from the piston to the projectile on the other side of the pillow. This discovery might indicate that future projectile velocities exceeding have to have a charging, gas-operated action that transfers the energy, rather than a system that uses primer, gunpowder, and a fraction of the released gas. A .22 LR cartridge is approximately three times the mass of the projectile in question. This may be another indication that future arms developments will take more interest in smaller caliber rounds, especially due to modern limitations such as metal usage, cost, and cartridge design. In a side-by-side comparison with the .50 BMG (43 g), the titanium round of any caliber released almost 2.8 times the energy of the .50 BMG with only a 27% mean loss in momentum. Energy, in most cases, is what is lethal to the target, not momentum. Conventional guns In conventional guns, muzzle velocity is determined by the quantity of the propellant, its quality (in terms of chemical burn speed and expansion), the mass of the projectile, and the length of the barrel. A slower-burning propellant needs a longer barrel to finish its burn before leaving, but conversely can use a heavier projectile. This is a mathematical tradeoff. 
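The energy-versus-momentum point made earlier is easy to reproduce for any load, since kinetic energy grows with the square of muzzle velocity while momentum grows only linearly; a light, very fast projectile can therefore carry more energy than a heavy, slower one while carrying less momentum. The masses and velocities below are illustrative, not the specific figures referred to above.

```python
def muzzle_energy_joules(mass_g, velocity_mps):
    """Kinetic energy at the muzzle, E = 1/2 * m * v**2."""
    return 0.5 * (mass_g / 1000.0) * velocity_mps ** 2

def momentum_kgms(mass_g, velocity_mps):
    """Momentum at the muzzle, p = m * v."""
    return (mass_g / 1000.0) * velocity_mps

# Illustrative comparison: a heavy, slower bullet versus a light, very fast one
heavy = (43.0, 900.0)    # mass in grams, velocity in m/s
light = (5.0, 3000.0)    # hypothetical light high-velocity projectile
for mass_g, v in (heavy, light):
    print(mass_g, "g:", round(muzzle_energy_joules(mass_g, v)), "J,",
          round(momentum_kgms(mass_g, v), 1), "kg*m/s")
```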
A faster-burning propellant may accelerate a lighter projectile to higher speeds if the same amount of propellant is used. Within a gun, the gaseous pressure created as a result of the combustion process is a limiting factor on projectile velocity. Consequently, propellant quality and quantity, projectile mass, and barrel length must all be balanced to achieve safety and to optimize performance. Longer barrels give the propellant force more time to work on propelling the bullet. For this reason longer barrels generally provide higher velocities, everything else being equal. As the bullet moves down the bore, however, the propellant's gas pressure behind it diminishes. Given a long enough barrel, there would eventually be a point at which friction between the bullet and the barrel, and air resistance, would equal the force of the gas pressure behind it, and from that point, the velocity of the bullet would decrease. Rifles Rifled barrels have spiral twists carved inside them that spin the bullet so that it remains stable in flight, in the same way an American football thrown in a spiral will fly in a straight, stable manner. This mechanism is known as rifling. Longer barrels provide more opportunity to rotate the bullet before it leaves the gun. Provided there's enough rifling in the barrel to adequately stabilize a particular round, there is no appreciable increase in precision with increasing barrel length. Longer barrels make it easier to aim if using iron sights, because of the longer sight radius, and with the right propellant load they can increase muzzle velocity, which gives a flatter trajectory and reduces the need to adjust for range. A bullet, while moving through its barrel, is being pushed forward by the gas expanding behind it. This gas was created following the trigger being pulled, causing the firing pin to strike the primer, which in turn ignited the solid propellant packed inside the bullet cartridge, making it combust while situated in the chamber. Once it leaves the barrel, the force of the expanding gas ceases to propel the bullet forth. When a bullet is fired from a handgun with a barrel, the bullet only has a "runway" to be spun before it leaves the barrel. Likewise, it has only a space in which to accelerate before it must fly without any additional force behind it. In some instances, the powder may not have even been fully burned in guns with short barrels. So, the muzzle velocity of a barrel is less than that of a barrel, which is less than that of a barrel. Large naval guns will have high length-to-diameter ratios, ranging between 38:1 to 50:1. This length ratio maximizes the projectile velocity. There is much interest in modernizing naval weaponry by using electrically powered railguns, which shoot projectiles using an electromagnetic pulse. These overcome the limitations noted above. With these railguns, a constant acceleration is provided along the entire length of the device by means of the electromagnetic pulse. This greatly increases the muzzle velocity. Another significant advantage of railguns is not requiring explosive propellant. The result of this is that a ship will not need to transport propellant and that a land-station will not have to maintain an inventory of it either. Explosive propellant, stored in large quantities, is susceptible to explosion. While this can be mitigated with safety precautions, railguns eschew the need for such measures altogether. Even the projectile's internal charges may be eliminated due to the already high velocity. 
This means the projectile becomes a strictly kinetic weapon. Categories of velocity The United States Army defines different categories of muzzle velocity for different classes of weapons: See also Firearm Gun chronograph Internal ballistics Muzzle energy References Ammunition Ballistics
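The barrel-length trade-off described above, with gas pressure falling as the bullet travels down the bore until friction eventually wins, can be illustrated with a deliberately crude model. The sketch assumes an instantaneous burn followed by adiabatic expansion of the propellant gas, opposed by a constant resistive force; every number is made up for illustration and does not describe any real cartridge.

```python
import numpy as np

def velocity_along_barrel(p0=2.5e8, x0=0.05, gamma=1.25, bore_area=2.4e-5,
                          mass=0.008, drag=150.0, length=0.6, steps=6000):
    """Toy internal-ballistics model: instantaneous burn, then adiabatic
    expansion p(x) = p0 * (x0 / x)**gamma behind the bullet, opposed by a
    constant resistive force. Returns positions and bullet velocities."""
    xs = np.linspace(x0, x0 + length, steps)
    v, vs = 0.0, []
    for x_prev, x in zip(xs[:-1], xs[1:]):
        force = p0 * (x0 / x) ** gamma * bore_area - drag
        accel = force / mass
        v = np.sqrt(max(v * v + 2.0 * accel * (x - x_prev), 0.0))
        vs.append(v)
    return xs[1:], np.array(vs)

# Velocity climbs steeply near the chamber and flattens out down the bore;
# past the point where drag exceeds the gas force it would begin to fall.
```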
Muzzle velocity
[ "Physics" ]
1,465
[ "Applied and interdisciplinary physics", "Ballistics" ]
282,350
https://en.wikipedia.org/wiki/Hygrometer
A hygrometer is an instrument which measures the humidity of air or some other gas: that is, how much of it is water vapor. Humidity measurement instruments usually rely on measurements of some other quantities such as temperature, pressure, mass, and mechanical or electrical changes in a substance as moisture is absorbed. By calibration and calculation, these measured quantities can be used to indicate the humidity. Modern electronic devices use the temperature of condensation (called the dew point), or they sense changes in electrical capacitance or resistance. The maximum amount of water vapor that can be present in a given volume of air (saturation) varies greatly by temperature; cold air can contain a lower mass of water per unit volume than hot air. Thus a change in the temperature can change the humidity. A prototype hygrometer was invented by Leonardo da Vinci in 1480. Major improvements occurred during the 1600s; Francesco Folli invented a more practical version of the device, and Robert Hooke improved a number of meteorological devices including the hygrometer. A more modern version was created by Swiss polymath Johann Heinrich Lambert in 1755. Later, in the year 1783, Swiss physicist and geologist Horace Bénédict de Saussure invented a hygrometer that uses a stretched human hair as its sensor. In the late 17th century, some scientists called humidity-measuring instruments hygroscopes; that word is no longer in use, but hygroscopic and hygroscopy, which derive from it, still are. Classical hygrometer Ancient hygrometers Crude hygrometers were devised and developed during the Shang dynasty in Ancient China to study weather. The Chinese used a bar of charcoal and a lump of earth: its dry weight was taken, then compared with its damp weight after being exposed in the air. The differences in weight were used to tally the humidity level. Other techniques were applied using mass to measure humidity, such as when the air was dry, the bar of charcoal would be light, while when the air was humid, the bar of charcoal would be heavy. By hanging a lump of earth on one end of a staff and a bar of charcoal on the other end, and attaching a fixed lifting string to the middle point to make the staff horizontal in dry air, an ancient hygrometer was made. Metal-paper coil type The metal-paper coil hygrometer is very useful for giving a dial indication of humidity changes. It appears most often in inexpensive devices, and its accuracy is limited, with variations of 10% or more. In these devices, water vapor is absorbed by a salt-impregnated paper strip attached to a metal coil, causing the coil to change shape. These changes (analogous to those in a bimetallic thermometer) cause an indication on a dial. There is usually a metal needle on the front of the gauge that will change where it points to. Hair tension hygrometers These devices use a human or animal hair under some tension. (Whalebone and other materials may be used in place of hair.) The hair is hygroscopic (tending toward retaining moisture); its length changes with humidity, and the length change may be magnified by a mechanism and indicated on a dial or scale. Swiss physicist and geologist Horace Bénédict de Saussure was the first to build such a hygrometer, in 1783. The traditional folk art device known as a weather house also works on this principle. The pulley is connected to an index which moves over a graduated scale (e). 
The instrument can be made more sensitive by removing oils from the hair, such as by first soaking the hair in diethyl ether. Psychrometer (wet-and-dry-bulb thermometer) A psychrometer, or a wet and dry-bulb thermometer, consists of two calibrated thermometers, one that is dry and one that is kept moist with distilled water on a sock or wick. At temperatures above the freezing point of water, evaporation of water from the wick lowers the temperature, such that the wet-bulb thermometer will be at a lower temperature than that of the dry-bulb thermometer. When the air temperature is below freezing, however, the wet-bulb must be covered with a thin coating of ice, in order to be accurate. As a result of the heat of sublimation, the wet-bulb temperature will eventually be lower than the dry bulb, although this may take many minutes of continued use of the psychrometer. Relative humidity (RH) is computed from the ambient temperature, shown by the dry-bulb thermometer and the difference in temperatures as shown by the wet-bulb and dry-bulb thermometers. Relative humidity can also be determined by locating the intersection of the wet and dry-bulb temperatures on a psychrometric chart. The dry and wet thermometers coincide when the air is fully saturated, and the greater the difference the drier the air. Psychrometers are commonly used in meteorology, and in the heating, ventilation, and air conditioning (HVAC) industry for proper refrigerant charging of residential and commercial air conditioning systems. Sling psychrometer A sling psychrometer, which uses thermometers attached to a handle is manually spun in free air flow until both temperatures stabilize. This is sometimes used for field measurements, but is being replaced by more convenient electronic sensors. A whirling psychrometer uses the same principle, but the two thermometers are fitted into a device that resembles a ratchet or football rattle. Chilled mirror dew point hygrometer Dew point is the temperature at which a sample of moist air (or any other water vapor) at constant pressure reaches water vapor saturation. At this saturation temperature, further cooling results in condensation of water. Chilled mirror dewpoint hygrometers are some of the most precise instruments commonly available. They use a chilled mirror and optoelectronic mechanism to detect condensation on the mirror's surface. The temperature of the mirror is controlled by electronic feedback to maintain a dynamic equilibrium between evaporation and condensation, thus closely measuring the dew point temperature. An accuracy of 0.2 °C is attainable with these devices, which correlates at typical office environments to a relative humidity accuracy of about ±1.2%. Older chilled-mirrors used a metallic mirror that needed cleaning and skilled labor. Newer implementations of chilled-mirrors use highly polished surfaces that do not require routine cleaning. More recently, spectroscopic chilled-mirrors have been introduced. Using this method, the dew point is determined with spectroscopic light detection which ascertains the nature of the condensation. This method avoids many of the pitfalls of the previous chilled-mirrors and is capable of operating drift free. Chilled-mirrors remain the reference measurement for calibration of other hygrometers. 
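The wet-and-dry-bulb conversion described above is, in essence, a small calculation. The sketch below uses the Magnus approximation for saturation vapour pressure and a ventilated-psychrometer coefficient of about 0.00066 per kelvin; both constants and the function names are common textbook values rather than the calibration of any particular instrument.

```python
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity_percent(t_dry, t_wet, pressure_hpa=1013.25, a=6.6e-4):
    """Relative humidity from dry- and wet-bulb temperatures (deg C) using the
    psychrometer relation e = e_s(T_wet) - A * p * (T_dry - T_wet)."""
    e = saturation_vapour_pressure_hpa(t_wet) - a * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapour_pressure_hpa(t_dry)

print(round(relative_humidity_percent(25.0, 20.0), 1))  # about 63% RH
```

Because this route is indirect, chilled-mirror instruments, which measure the dew point itself, are the ones kept as the calibration reference.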
This is due to their fundamental first-principle nature that refers to the core of condensation physics and measures temperature, which is one of the base quantities of the International System of Quantities (length, time, amount of substance, electric current, temperature, luminous intensity, mass). Modern hygrometers Capacitive When cost, space, or fragility are important, other types of electronic sensors are used, at the price of lower accuracy. Capacitive hygrometers measure the effect of humidity on the dielectric constant of a polymer or a metal oxide. When calibrated, their accuracy at relative humidities between 5% and 95% is ±2% RH; uncalibrated, this is two to three times worse. Capacitive sensors are robust against effects such as condensation and temporary high temperatures, but subject to contamination, drift and aging effects. They are, however, suitable for many applications. Resistive In resistive hygrometers, the change in electrical resistance of a material due to humidity is measured. Typical materials are salts and conductive polymers. Resistive sensors are less sensitive than capacitive sensors – the change in material properties is less, so they require more complex circuitry. The material properties also tend to depend both on humidity and temperature, which means in practice that the sensor must be combined with a temperature sensor. The accuracy and robustness against condensation vary depending on the chosen resistive material. Robust, condensation-resistant sensors exist with an accuracy of up to ±3% RH (relative humidity). Thermal In thermal hygrometers, the change in thermal conductivity of air due to humidity is measured. These sensors measure absolute humidity rather than relative humidity. Gravimetric A gravimetric hygrometer extracts the water from the air (or other gas) and weighs it separately, for example by weighing a desiccant before and after it has absorbed the water. The temperature, pressure and volume of the resulting dry gas are also measured, providing enough information to calculate the amount of water per mole of gas. This is considered the most accurate primary method of measuring absolute humidity, and national standards based on it have been developed in US, UK, EU and Japan. However, the inconvenience of using such devices means they are usually only used to calibrate less accurate instruments, called Transfer Standards. Optical An optical hygrometer measures the absorption of light by water in the air. A light emitter and a light detector are arranged with a volume of air between them. The attenuation of the light, as seen by the detector, indicates the humidity, according to the Beer–Lambert law. Types include the Lyman-alpha hygrometer (using Lyman-alpha light emitted by hydrogen), the krypton hygrometer (using 123.58 nm light emitted by krypton), and the differential absorption hygrometer (using light emitted by two lasers operating at different wavelengths, one absorbed by humidity and the other not). Applications Aside from greenhouses and industrial spaces, hygrometers are also used in some incubators, saunas, humidors and museums. They are also used in the care of wooden musical instruments such as pianos, guitars, violins, and harps which can be damaged by improper humidity conditions. Hygrometers play a big part in firefighting as the lower the relative humidity, the more vigorously fuels may burn. 
In residential settings, hygrometers are used to assist in humidity control (too low humidity can damage human skin and body, while too high humidity favors growth of mildew and dust mite). Hygrometers are also used in the coating industry because the application of paint and other coatings may be very sensitive to humidity and dew point. Difficulty of accurate humidity measurement Humidity measurement is among the most difficult problems in basic metrology. According to the WMO Guide, "The achievable accuracies [for humidity determination] listed in the table refer to good quality instruments that are well operated and maintained. In practice, these are not easy to achieve." Two thermometers can be compared by immersing them both in an insulated vessel of water (or alcohol, for temperatures below the freezing point of water) and stirring vigorously to minimize temperature variations. A high-quality liquid-in-glass thermometer if handled with care should remain stable for some years. Hygrometers must be calibrated in air, which is a much less effective heat transfer medium than is water, and many types are subject to drift so need regular recalibration. A further difficulty is that most hygrometers sense relative humidity rather than the absolute amount of water present, but relative humidity is a function of both temperature and absolute moisture content, so small temperature variations within the air in a test chamber will translate into relative humidity variations. In a cold and humid environment, sublimation of ice may occur on the sensor head, whether it is a hair, dew cell, mirror, capacitance sensing element, or dry-bulb thermometer of an aspiration psychrometer. The ice on the probe matches the reading to the saturation humidity with respect to ice at that temperature, i.e. the frost point. However, a conventional hygrometer is unable to measure properly under the frost point, and the only way to go around this fundamental problem is to use a heated humidity probe. Calibration standards Psychrometer calibration Accurate calibration of the thermometers used is fundamental to precise humidity determination by the wet-dry method. The thermometers must be protected from radiant heat and must have a sufficiently high flow of air over the wet bulb for the most accurate results. One of the most precise types of wet-dry bulb psychrometer was invented in the late 19th century by Adolph Richard Assmann (1845–1918); in English-language references the device is usually spelled "Assmann psychrometer." In this device, each thermometer is suspended within a vertical tube of polished metal, and that tube is in turn suspended within a second metal tube of slightly larger diameter; these double tubes serve to isolate the thermometers from radiant heating. Air is drawn through the tubes with a fan that is driven by a clockwork mechanism to ensure a consistent speed (some modern versions use an electric fan with electronic speed control). According to Middleton, 1966, "an essential point is that air is drawn between the concentric tubes, as well as through the inner one." It is very challenging, particularly at low relative humidity, to obtain the maximal theoretical depression of the wet-bulb temperature; an Australian study in the late 1990s found that liquid-in-glass wet-bulb thermometers were warmer than theory predicted even when considerable precautions were taken; these could lead to RH value readings that are 2 to 5 percent points too high. 
One solution sometimes used for accurate humidity measurement when the air temperature is below freezing is to use a thermostatically controlled electric heater to raise the temperature of outside air to above freezing. In this arrangement, a fan draws outside air past (1) a thermometer to measure the ambient dry-bulb temperature, (2) the heating element, (3) a second thermometer to measure the dry-bulb temperature of the heated air, then finally (4) a wet-bulb thermometer. According to the World Meteorological Organization Guide, "The principle of the heated psychrometer is that the water vapor content of an air mass does not change if it is heated. This property may be exploited to the advantage of the psychrometer by avoiding the need to maintain an ice bulb under freezing conditions.". Since the humidity of the ambient air is calculated indirectly from three temperature measurements, in such a device accurate thermometer calibration is even more important than for a two-bulb configuration. Saturated salt calibration Various researchers have investigated the use of saturated salt solutions for calibrating hygrometers. Slushy mixtures of certain pure salts and distilled water have the property that they maintain an approximately constant humidity in a closed container. A saturated table salt (sodium chloride) bath will eventually give a reading of approximately 75%. Other salts have other equilibrium humidity levels: Lithium chloride ~11%; Magnesium chloride ~33%; Potassium carbonate ~43%; Potassium sulfate ~97%. Salt solutions will vary somewhat in humidity with temperature and they can take relatively long times to come to equilibrium, but their ease of use compensates somewhat for these disadvantages in low precision applications, such as checking mechanical and electronic hygrometers. See also Automated airport weather station Dewcell Friar of the weather Humidistat Moisture analysis Soil moisture sensor References External links IMA moisture measurement training site USATODAY.com: How a Sling Psychrometer Works NIST page on humidity calibration Article on difficulty of humidity calibration Article on RH sensors NOAA homepage for cryogenic chilled-mirror frostpoint hygrometers Atmospheric thermodynamics Chinese inventions Italian inventions Meteorological instrumentation and equipment Navigational equipment Psychrometrics Swiss inventions Hydrology instrumentation Sensors
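The saturated-salt fixed points listed above lend themselves to a simple calibration check: record the hygrometer reading over each slushy salt solution and compare it with the nominal equilibrium humidity. A small sketch using the approximate values quoted in the text (the helper names are illustrative):

```python
# Approximate equilibrium relative humidities (%) over saturated salt
# solutions near room temperature, as quoted above.
SALT_RH = {
    "lithium chloride": 11,
    "magnesium chloride": 33,
    "potassium carbonate": 43,
    "sodium chloride": 75,
    "potassium sulfate": 97,
}

def calibration_errors(readings):
    """readings: dict of salt name -> hygrometer reading (% RH).
    Returns the deviation of each reading from its nominal fixed point."""
    return {salt: readings[salt] - SALT_RH[salt]
            for salt in readings if salt in SALT_RH}

print(calibration_errors({"sodium chloride": 78, "magnesium chloride": 31}))
# {'sodium chloride': 3, 'magnesium chloride': -2}
```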
Hygrometer
[ "Technology", "Engineering", "Environmental_science" ]
3,371
[ "Hydrology", "Meteorological instrumentation and equipment", "Hydrology instrumentation", "Measuring instruments", "Sensors" ]
282,998
https://en.wikipedia.org/wiki/Prism%20%28optics%29
An optical prism is a transparent optical element with flat, polished surfaces that are designed to refract light. At least one surface must be angled — elements with two parallel surfaces are not prisms. The most familiar type of optical prism is the triangular prism, which has a triangular base and rectangular sides. Not all optical prisms are geometric prisms, and not all geometric prisms would count as an optical prism. Prisms can be made from any material that is transparent to the wavelengths for which they are designed. Typical materials include glass, acrylic and fluorite. A dispersive prism can be used to break white light up into its constituent spectral colors (the colors of the rainbow) to form a spectrum as described in the following section. Other types of prisms noted below can be used to reflect light, or to split light into components with different polarizations. Types Dispersive Dispersive prisms are used to break up light into its constituent spectral colors because the refractive index depends on wavelength; the white light entering the prism is a mixture of different wavelengths, each of which gets bent slightly differently. Blue light is slowed more than red light and will therefore be bent more than red light. Triangular prism Amici prism and other types of compound prisms Littrow prism with mirror on its rear facet Pellin–Broca prism Abbe prism Grism, a dispersive prism with a diffraction grating on its surface Féry prism Spectral dispersion is the best known property of optical prisms, although not the most frequent purpose of using optical prisms in practice. Reflective Reflective prisms are used to reflect light, in order to flip, invert, rotate, deviate or displace the light beam. They are typically used to erect the image in binoculars or single-lens reflex cameras – without the prisms the image would be upside down for the user. Reflective prisms use total internal reflection to achieve near-perfect reflection of light that strikes the facets at a sufficiently oblique angle. Prisms are usually made of optical glass which, combined with anti-reflective coating of input and output facets, leads to significantly lower light loss than metallic mirrors. 
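Both behaviours described above, the wavelength-dependent deviation in a dispersive prism and the loss-free reflection once the internal angle exceeds the critical angle, can be put into numbers with Snell's law. The sketch below uses a two-term Cauchy approximation with coefficients roughly representative of a crown glass; all values are illustrative.

```python
import math

def cauchy_index(wavelength_um, a=1.5046, b=0.0042):
    """Two-term Cauchy approximation n = A + B / lambda**2
    (illustrative coefficients, roughly a crown glass)."""
    return a + b / wavelength_um ** 2

def prism_deviation_deg(apex_deg, incidence_deg, n):
    """Total deviation through a triangular prism: Snell's law at both faces,
    deviation = entry angle + exit angle - apex angle."""
    apex = math.radians(apex_deg)
    t1 = math.radians(incidence_deg)
    t2 = math.asin(math.sin(t1) / n)   # refraction into the glass
    t3 = apex - t2                     # internal angle at the exit face
    t4 = math.asin(n * math.sin(t3))   # refraction back out
    return math.degrees(t1 + t4) - apex_deg

def critical_angle_deg(n):
    """Internal angle above which total internal reflection occurs."""
    return math.degrees(math.asin(1.0 / n))

for wl in (0.45, 0.55, 0.65):          # blue, green, red wavelengths in micrometres
    print(wl, "um:", round(prism_deviation_deg(60.0, 50.0, cauchy_index(wl)), 2), "deg")
print("critical angle:", round(critical_angle_deg(1.52), 1), "deg")  # ~41 deg
```

In this example blue light is deviated roughly a degree more than red, and the critical angle of about 41 degrees, comfortably below 45 degrees, is what lets the 45-90-45 reflecting prisms listed below work without any mirror coating.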
Odd number of reflections, image projects as flipped (mirrored) triangular prism reflector, projects image sideways (chromatic dispersion is zero in case of perpendicular input and output incidence) Roof pentaprism projects image sideways flipped along the other axis Dove prism projects image forward Corner-cube retroreflector projects image backwards Even number of reflections, image projects upright (without change in handedness; may or may not be rotated) Porro prism projects image backwards and displaced Porro–Abbe prism projects image forward, rotated by 180° and displaced Perger prism a development based on the Porro–Abbe prism, projects image forward, rotated by 180° and displaced Abbe–Koenig prism projects image forward, rotated by 180° and collinear (4 internal reflections [2 reflections are on roof plains]) Bauernfeind prism projects image sideways (inclined by 45°) Amici roof prism projects image sideways Pentaprism projects image sideways Schmidt–Pechan prism projects image forward, rotated by 180° (6 reflections [2 reflections are on roof plains]; composed of Bauernfeind part and Schmidt part) Uppendahl prism projects image forward, rotated by 180° and collinear (6 reflections [2 reflections are on roof plains]); composed of 3 prisms cemented together) Beam-splitting Various thin-film optical layers can be deposited on the hypotenuse of one right-angled prism, and cemented to another prism to form a beam-splitter cube. Overall optical performance of such a cube is determined by the thin layer. In comparison with a usual glass substrate, the glass cube provides protection of the thin-film layer from both sides and better mechanical stability. The cube can also eliminate etalon effects, back-side reflection and slight beam deflection. dichroic color filters form a dichroic prism Polarizing cube beamsplitters have lower extinction ratio than birefringent ones, but less expensive Partially-metallized mirrors provide non-polarizing beamsplitters Air gap − When hypotenuses of two triangular prisms are stacked very close to each other with air gap, frustrated total internal reflection in one prism makes it possible to couple part of the radiation into a propagating wave in the second prism. The transmitted power drops exponentially with the gap width, so it can be tuned over many orders of magnitude by a micrometric screw. Biprism (or Fresnel biprism): two prisms joined at their bases, forming a wide vertex angle (~ 180°); used in common-path interferometry. Polarizing Another class is formed by polarizing prisms which use birefringence to split a beam of light into components of varying polarization. In the visible and UV regions, they have very low losses and their extinction ratio typically exceeds , which is superior to other types of polarizers. 
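What an extinction ratio means in practice can be sketched with Malus's law plus a leakage term: an analyser passes cos²θ of the wanted polarization and roughly 1/(extinction ratio) of the unwanted one. The model and the numbers below are illustrative.

```python
import math

def analyser_transmission(theta_deg, extinction_ratio=1.0e5):
    """Idealized transmission of linearly polarized light through an analyser
    rotated by theta: Malus's law plus a leakage floor set by the extinction
    ratio (absorption and surface losses ignored)."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return c2 + (1.0 - c2) / extinction_ratio

for angle in (0, 45, 90):
    print(angle, "deg:", analyser_transmission(angle))
# At 90 degrees only about one part in the extinction ratio leaks through,
# far less than for typical film polarizers.
```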
They may or may not employ total internal reflection; One polarization is separated by total internal reflection: Nicol prism Glan–Foucault prism Glan–Taylor prism, a high-power variant of which is also denoted as Glan–laser prism Glan–Thompson prism One polarization is deviated by different refraction only: Rochon prism Sénarmont prism Both polarizations are deviated by refraction: Wollaston prism Nomarski prism – a variant of the Wollaston prism where p- and s-components emerge displaced and converging towards each other; important for differential interference contrast microscopy Both polarizations stay parallel, but are spatially separated: polarisation beam displacers, typically made of thick anisotropic crystal with plan-parallel facets These are typically made of a birefringent crystalline material like calcite, but other materials like quartz and α-BBO may be necessary for UV applications, and others (, and ) will extend transmission farther into the infrared spectral range. Prisms made of isotropic materials like glass will also alter polarization of light, as partial reflection under oblique angles does not maintain the amplitude ratio (nor phase) of the s- and p-polarized components of the light, leading to general elliptical polarization. This is generally an unwanted effect of dispersive prisms. In some cases this can be avoided by choosing prism geometry which light enters and exits under perpendicular angle, by compensation through non-planar light trajectory, or by use of p-polarized light. Total internal reflection alters only the mutual phase between s- and p-polarized light. Under well chosen angle of incidence, this phase is close to . Fresnel rhomb uses this effect to achieve conversion between circular and linear polarisation. This phase difference is not explicitly dependent on wavelength, but only on refractive index, so Fresnel rhombs made of low-dispersion glasses achieve much broader spectral range than quarter-wave plates. They displace the beam, however. Doubled Fresnel rhomb, with quadruple reflection and zero beam displacement, substitutes a half-wave plate. Similar effect can also be used to make a polarization-maintaining optics. Depolarizers Birefringent crystals can be assembled in a way that leads to apparent depolarization of the light. Cornu depolarizer Lyot depolarizer Depolarization would not be observed for an ideal monochromatic plane wave, as actually both devices turn reduced temporal coherence or spatial coherence, respectively, of the beam into decoherence of its polarization components. Other uses Total internal reflection in prisms finds numerous uses through optics, plasmonics and microscopy. In particular: Prisms are used to couple propagating light to surface plasmons. Either the hypotenuse of a triangular prism is metallized (Kretschmann configuration), or evanescent wave is coupled to very close metallic surface (Otto configuration). Some laser active media can be formed as a prism where the low-quality pump beam enters the front facet, while the amplified beam undergoes total internal reflection under grazing incidence from it. Such a design suffers less from thermal stress and is easy to be pumped by high-power laser diodes. Other uses of prisms are based on their beam-deviating refraction: Wedge prisms are used to deflect a beam of monochromatic light by a fixed angle. A pair of such prisms can be used for beam steering; by rotating the prisms the beam can be deflected into any desired angle within a conical "field of regard". 
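For the thin wedges just mentioned, the deviation is approximately (n − 1) times the apex angle, and two such wedges rotated against each other steer the beam anywhere between zero and twice that deviation. A small sketch; the wedge angle and index are illustrative.

```python
def thin_wedge_deviation_deg(apex_deg, n):
    """Small-angle approximation for a thin prism: deviation ~ (n - 1) * apex."""
    return (n - 1.0) * apex_deg

single = thin_wedge_deviation_deg(4.0, 1.52)   # one 4-degree wedge
print(round(single, 2), "deg per wedge")        # ~2.1 deg
print("steering range of a counter-rotated pair: 0 to", round(2 * single, 2), "deg")
```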
The most commonly found implementation is a Risley prism pair. Transparent windows of, e.g., vacuum chambers or cuvettes can also be slightly wedged (10' − 1°). While this does not reduce reflection, it suppresses Fabry-Pérot interferences that would otherwise modulate their transmission spectrum. Anamorphic pair of similar, but asymmetrically placed prisms can also change the profile of a beam. This is often used to make a round beam from the elliptical output of a laser diode. With its monochromatic light, slight chromatic dispersion arising from different wedge inclination is not a problem. Deck prisms were used on sailing ships to bring daylight below deck, since candles and kerosene lamps are a fire hazard on wooden ships. In optometry By shifting corrective lenses off axis, images seen through them can be displaced in the same way that a prism displaces images. Eye care professionals use prisms, as well as lenses off axis, to treat various orthoptics problems: Diplopia (double vision) Positive and negative fusion problems Prism spectacles with a single prism perform a relative displacement of the two eyes, thereby correcting eso-, exo, hyper- or hypotropia. In contrast, spectacles with prisms of equal power for both eyes, called yoked prisms (also: conjugate prisms, ambient lenses or performance glasses) shift the visual field of both eyes to the same extent. See also Minimum deviation Multiple-prism dispersion theory Prism compressor Prism dioptre Prism spectrometer Prism (geometry) Theory of Colours Triangular prism (geometry) Superprism Eyeglass prescription Prism lighting References Further reading External links Java applet of refraction through a prism Optical components
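Ophthalmic prism powers are usually quoted in prism dioptres, one prism dioptre being a displacement of 1 cm at a distance of 1 m. The conversion to and from a deviation angle is a one-liner; the prescription value used below is illustrative.

```python
import math

def dioptres_to_deviation_deg(prism_dioptres):
    """Prism dioptres -> deviation angle (1 dioptre = 1 cm displacement at 1 m)."""
    return math.degrees(math.atan(prism_dioptres / 100.0))

def displacement_cm(prism_dioptres, distance_m):
    """Apparent image displacement produced at a given viewing distance."""
    return prism_dioptres * distance_m

p = 3.0                                                  # a 3-dioptre prism
print(round(dioptres_to_deviation_deg(p), 2), "deg")     # ~1.72 deg
print(displacement_cm(p, 6.0), "cm at 6 m")              # 18 cm
```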
Prism (optics)
[ "Materials_science", "Technology", "Engineering" ]
2,183
[ "Glass engineering and science", "Optical components", "Components" ]
283,810
https://en.wikipedia.org/wiki/Mass%20spectrometry
Mass spectrometry (MS) is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures. A mass spectrum is a type of plot of the ion signal as a function of the mass-to-charge ratio. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds. In a typical MS procedure, a sample, which may be solid, liquid, or gaseous, is ionized, for example by bombarding it with a beam of electrons. This may cause some of the sample's molecules to break up into positively charged fragments or simply become positively charged without fragmenting. These ions (fragments) are then separated according to their mass-to-charge ratio, for example by accelerating them and subjecting them to an electric or magnetic field: ions of the same mass-to-charge ratio will undergo the same amount of deflection. The ions are detected by a mechanism capable of detecting charged particles, such as an electron multiplier. Results are displayed as spectra of the signal intensity of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses (e.g. an entire molecule) to the identified masses or through a characteristic fragmentation pattern. History of the mass spectrometer In 1886, Eugen Goldstein observed rays in gas discharges under low pressure that traveled away from the anode and through channels in a perforated cathode, opposite to the direction of negatively charged cathode rays (which travel from cathode to anode). Goldstein called these positively charged anode rays "Kanalstrahlen"; the standard translation of this term into English is "canal rays". Wilhelm Wien found that strong electric or magnetic fields deflected the canal rays and, in 1899, constructed a device with perpendicular electric and magnetic fields that separated the positive rays according to their charge-to-mass ratio (Q/m). Wien found that the charge-to-mass ratio depended on the nature of the gas in the discharge tube. English scientist J. J. Thomson later improved on the work of Wien by reducing the pressure to create the mass spectrograph. The word spectrograph had become part of the international scientific vocabulary by 1884. Early spectrometry devices that measured the mass-to-charge ratio of ions were called mass spectrographs which consisted of instruments that recorded a spectrum of mass values on a photographic plate. A mass spectroscope is similar to a mass spectrograph except that the beam of ions is directed onto a phosphor screen. A mass spectroscope configuration was used in early instruments when it was desired that the effects of adjustments be quickly observed. Once the instrument was properly adjusted, a photographic plate was inserted and exposed. The term mass spectroscope continued to be used even though the direct illumination of a phosphor screen was replaced by indirect measurements with an oscilloscope. The use of the term mass spectroscopy is now discouraged due to the possibility of confusion with light spectroscopy. Mass spectrometry is often abbreviated as mass-spec or simply as MS. 
Modern techniques of mass spectrometry were devised by Arthur Jeffrey Dempster and F.W. Aston in 1918 and 1919 respectively. Sector mass spectrometers known as calutrons were developed by Ernest O. Lawrence and used for separating the isotopes of uranium during the Manhattan Project. Calutron mass spectrometers were used for uranium enrichment at the Oak Ridge, Tennessee Y-12 plant established during World War II. In 1989, half of the Nobel Prize in Physics was awarded to Hans Dehmelt and Wolfgang Paul for the development of the ion trap technique in the 1950s and 1960s. In 2002, the Nobel Prize in Chemistry was awarded to John Bennett Fenn for the development of electrospray ionization (ESI) and Koichi Tanaka for the development of soft laser desorption (SLD) and their application to the ionization of biological macromolecules, especially proteins. Parts of a mass spectrometer A mass spectrometer consists of three components: an ion source, a mass analyzer, and a detector. The ionizer converts a portion of the sample into ions. There is a wide variety of ionization techniques, depending on the phase (solid, liquid, gas) of the sample and the efficiency of various ionization mechanisms for the unknown species. An extraction system removes ions from the sample, which are then targeted through the mass analyzer and into the detector. The differences in masses of the fragments allows the mass analyzer to sort the ions by their mass-to-charge ratio. The detector measures the value of an indicator quantity and thus provides data for calculating the abundances of each ion present. Some detectors also give spatial information, e.g., a multichannel plate. Theoretical example The following describes the operation of a spectrometer mass analyzer, which is of the sector type. (Other analyzer types are treated below.) Consider a sample of sodium chloride (table salt). In the ion source, the sample is vaporized (turned into gas) and ionized (transformed into electrically charged particles) into sodium (Na+) and chloride (Cl−) ions. Sodium atoms and ions are monoisotopic, with a mass of about 23 daltons (symbol: Da or older symbol: u). Chloride atoms and ions come in two stable isotopes with masses of approximately 35 u (at a natural abundance of about 75 percent) and approximately 37 u (at a natural abundance of about 25 percent). The analyzer part of the spectrometer contains electric and magnetic fields, which exert forces on ions traveling through these fields. The speed of a charged particle may be increased or decreased while passing through the electric field, and its direction may be altered by the magnetic field. The magnitude of the deflection of the moving ion's trajectory depends on its mass-to-charge ratio. Lighter ions are deflected by the magnetic force to a greater degree than heavier ions (based on Newton's second law of motion, F = ma). The streams of magnetically sorted ions pass from the analyzer to the detector, which records the relative abundance of each ion type. This information is used to determine the chemical element composition of the original sample (i.e. that both sodium and chlorine are present in the sample) and the isotopic composition of its constituents (the ratio of 35Cl to 37Cl). Creating ions The ion source is the part of the mass spectrometer that ionizes the material under analysis (the analyte). The ions are then transported by magnetic or electric fields to the mass analyzer. 
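The sodium chloride example above can be put into numbers. After acceleration through a potential V, an ion of mass m and charge q follows a circular arc of radius r = m·v/(q·B) in the magnetic sector, so the lighter Na+ is bent onto a tighter arc than either chlorine isotope. The accelerating voltage and field strength below are illustrative instrument values, not those of any particular spectrometer.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def sector_radius_m(mass_da, charge_z, accel_volts, b_tesla):
    """Radius of the ion's circular path in a magnetic sector analyser:
    q*V = 1/2*m*v**2 during acceleration, then r = m*v / (q*B)."""
    m = mass_da * DALTON
    q = charge_z * E_CHARGE
    v = math.sqrt(2 * q * accel_volts / m)
    return m * v / (q * b_tesla)

# Illustrative: 5 kV acceleration into a 0.5 T field
for name, mass in (("Na+", 23), ("35Cl", 35), ("37Cl", 37)):
    print(name, round(sector_radius_m(mass, 1, 5000, 0.5), 4), "m")
```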
Techniques for ionization have been key to determining what types of samples can be analyzed by mass spectrometry. Electron ionization and chemical ionization are used for gases and vapors. In chemical ionization sources, the analyte is ionized by chemical ion-molecule reactions during collisions in the source. Two techniques often used with liquid and solid biological samples include electrospray ionization (invented by John Fenn) and matrix-assisted laser desorption/ionization (MALDI, initially developed as a similar technique "Soft Laser Desorption (SLD)" by K. Tanaka for which a Nobel Prize was awarded and as MALDI by M. Karas and F. Hillenkamp). Hard ionization and soft ionization In mass spectrometry, ionization refers to the production of gas phase ions suitable for resolution in the mass analyser or mass filter. Ionization occurs in the ion source. There are several ion sources available; each has advantages and disadvantages for particular applications. For example, electron ionization (EI) gives a high degree of fragmentation, yielding highly detailed mass spectra which when skilfully analysed can provide important information for structural elucidation/characterisation and facilitate identification of unknown compounds by comparison to mass spectral libraries obtained under identical operating conditions. However, EI is not suitable for coupling to HPLC, i.e. LC-MS, since at atmospheric pressure, the filaments used to generate electrons burn out rapidly. Thus EI is coupled predominantly with GC, i.e. GC-MS, where the entire system is under high vacuum. Hard ionization techniques are processes which impart high quantities of residual energy in the subject molecule invoking large degrees of fragmentation (i.e. the systematic rupturing of bonds acts to remove the excess energy, restoring stability to the resulting ion). Resultant ions tend to have m/z lower than the molecular ion (other than in the case of proton transfer and not including isotope peaks). The most common example of hard ionization is electron ionization (EI). Soft ionization refers to the processes which impart little residual energy onto the subject molecule and as such result in little fragmentation. Examples include fast atom bombardment (FAB), chemical ionization (CI), atmospheric-pressure chemical ionization (APCI), atmospheric-pressure photoionization (APPI), electrospray ionization (ESI), desorption electrospray ionization (DESI), and matrix-assisted laser desorption/ionization (MALDI). Inductively coupled plasma Inductively coupled plasma (ICP) sources are used primarily for cation analysis of a wide array of sample types. In this source, a plasma that is electrically neutral overall, but that has had a substantial fraction of its atoms ionized by high temperature, is used to atomize introduced sample molecules and to further strip the outer electrons from those atoms. The plasma is usually generated from argon gas, since the first ionization energy of argon atoms is higher than the first of any other elements except He, F and Ne, but lower than the second ionization energy of all except the most electropositive metals. The heating is achieved by a radio-frequency current passed through a coil surrounding the plasma. Photoionization mass spectrometry Photoionization can be used in experiments which seek to use mass spectrometry as a means of resolving chemical kinetics mechanisms and isomeric product branching. 
In such instances a high energy photon, either X-ray or UV, is used to dissociate stable gaseous molecules in a carrier gas of He or Ar. In instances where a synchrotron light source is utilized, a tuneable photon energy can be used to acquire a photoionization efficiency curve which can be used in conjunction with the charge ratio m/z to fingerprint molecular and ionic species. More recently atmospheric pressure photoionization (APPI) has been developed to ionize molecules mostly as effluents of LC-MS systems. Ambient ionization Some applications for ambient ionization include environmental as well as clinical applications. In these techniques, ions form in an ion source outside the mass spectrometer. Sampling becomes easy as the samples don't need previous separation or preparation. Some examples of ambient ionization techniques are Direct Analysis in Real Time (DART), DESI, SESI, LAESI, desorption atmospheric-pressure chemical ionization (DAPCI), Soft Ionization by Chemical Reaction in Transfer (SICRT) and desorption atmospheric pressure photoionization (DAPPI), among others. Other ionization techniques Others include glow discharge, field desorption (FD), fast atom bombardment (FAB), thermospray, desorption/ionization on silicon (DIOS), atmospheric pressure chemical ionization (APCI), secondary ion mass spectrometry (SIMS), spark ionization and thermal ionization (TIMS). Mass selection Mass analyzers separate the ions according to their mass-to-charge ratio. The following two laws govern the dynamics of charged particles in electric and magnetic fields in vacuum: F = Q(E + v × B) (Lorentz force law); F = ma (Newton's second law of motion in the non-relativistic case, i.e. valid only at ion velocity much lower than the speed of light). Here F is the force applied to the ion, m is the mass of the ion, a is the acceleration, Q is the ion charge, E is the electric field, and v × B is the vector cross product of the ion velocity and the magnetic field. Equating the above expressions for the force applied to the ion yields (m/Q)a = E + v × B. This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data, it is common to use the (officially) dimensionless m/z, where z is the number of elementary charges (e) on the ion (z=Q/e). This quantity, although it is informally called the mass-to-charge ratio, more accurately speaking represents the ratio of the mass number and the charge number, z. There are many types of mass analyzers, using either static or dynamic fields, and magnetic or electric fields, but all operate according to the above differential equation. Each analyzer type has its strengths and weaknesses. Many mass spectrometers use two or more mass analyzers for tandem mass spectrometry (MS/MS). In addition to the more common mass analyzers listed below, there are others designed for special situations. There are several important analyzer characteristics. The mass resolving power is the measure of the ability to distinguish two peaks of slightly different m/z. The mass accuracy is the ratio of the m/z measurement error to the true m/z. Mass accuracy is usually measured in ppm or milli mass units. The mass range is the range of m/z amenable to analysis by a given analyzer. 
The linear dynamic range is the range over which ion signal is linear with analyte concentration. Speed refers to the time frame of the experiment and ultimately is used to determine the number of spectra per unit time that can be generated. Sector instruments A sector field mass analyzer uses a static electric and/or magnetic field to affect the path and/or velocity of the charged particles in some way. As shown above, sector instruments bend the trajectories of the ions as they pass through the mass analyzer, according to their mass-to-charge ratios, deflecting the more charged and faster-moving, lighter ions more. The analyzer can be used to select a narrow range of m/z or to scan through a range of m/z to catalog the ions present. Time-of-flight The time-of-flight (TOF) analyzer uses an electric field to accelerate the ions through the same potential, and then measures the time they take to reach the detector. If the particles all have the same charge, their kinetic energies will be identical, and their velocities will depend only on their masses. For example, ions with a lower mass will travel faster, reaching the detector first. Ions usually are moving prior to being accelerated by the electric field, this causes particles with the same m/z to arrive at different times at the detector. This difference in initial velocities is often not dependent on the mass of the ion, and will turn into a difference in the final velocity. This distribution in velocities broadens the peaks shown on the count vs m/z plot, but will generally not change the central location of the peaks, since the starting velocity of ions is generally centered at zero. To fix this problem, time-lag focusing/delayed extraction has been coupled with TOF-MS. Quadrupole mass filter Quadrupole mass analyzers use oscillating electrical fields to selectively stabilize or destabilize the paths of ions passing through a radio frequency (RF) quadrupole field created between four parallel rods. Only the ions in a certain range of mass/charge ratio are passed through the system at any time, but changes to the potentials on the rods allow a wide range of m/z values to be swept rapidly, either continuously or in a succession of discrete hops. A quadrupole mass analyzer acts as a mass-selective filter and is closely related to the quadrupole ion trap, particularly the linear quadrupole ion trap except that it is designed to pass the untrapped ions rather than collect the trapped ones, and is for that reason referred to as a transmission quadrupole. A magnetically enhanced quadrupole mass analyzer includes the addition of a magnetic field, either applied axially or transversely. This novel type of instrument leads to an additional performance enhancement in terms of resolution and/or sensitivity depending upon the magnitude and orientation of the applied magnetic field. A common variation of the transmission quadrupole is the triple quadrupole mass spectrometer. The "triple quad" has three consecutive quadrupole stages, the first acting as a mass filter to transmit a particular incoming ion to the second quadrupole, a collision chamber, wherein that ion can be broken into fragments. The third quadrupole also acts as a mass filter, to transmit a particular fragment ion to the detector. If a quadrupole is made to rapidly and repetitively cycle through a range of mass filter settings, full spectra can be reported. Likewise, a triple quad can be made to perform various scan types characteristic of tandem mass spectrometry. 
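The time-of-flight behaviour described above follows from equating the acceleration energy q·V with the kinetic energy ½·m·v²: the drift time over a flight path L scales with the square root of m/z, so heavier ions arrive later. The analyser dimensions below are illustrative.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def flight_time_us(mass_da, charge_z, accel_volts=20000.0, drift_m=1.0):
    """Ideal TOF drift time in microseconds: t = L * sqrt(m / (2*z*e*V))."""
    m = mass_da * DALTON
    return drift_m * math.sqrt(m / (2 * charge_z * E_CHARGE * accel_volts)) * 1e6

for mz in (100, 1000, 10000):
    print(mz, "Da:", round(flight_time_us(mz, 1), 2), "us")
# Arrival time grows as the square root of m/z.
```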
Ion traps Three-dimensional quadrupole ion trap The quadrupole ion trap works on the same physical principles as the quadrupole mass analyzer, but the ions are trapped and sequentially ejected. Ions are trapped in a mainly quadrupole RF field, in a space defined by a ring electrode (usually connected to the main RF potential) between two endcap electrodes (typically connected to DC or auxiliary AC potentials). The sample is ionized either internally (e.g. with an electron or laser beam), or externally, in which case the ions are often introduced through an aperture in an endcap electrode. There are many mass/charge separation and isolation methods but the most commonly used is the mass instability mode in which the RF potential is ramped so that the orbit of ions with a mass are stable while ions with mass b become unstable and are ejected on the z-axis onto a detector. There are also non-destructive analysis methods. Ions may also be ejected by the resonance excitation method, whereby a supplemental oscillatory excitation voltage is applied to the endcap electrodes, and the trapping voltage amplitude and/or excitation voltage frequency is varied to bring ions into a resonance condition in order of their mass/charge ratio. Cylindrical ion trap The cylindrical ion trap mass spectrometer (CIT) is a derivative of the quadrupole ion trap where the electrodes are formed from flat rings rather than hyperbolic shaped electrodes. The architecture lends itself well to miniaturization because as the size of a trap is reduced, the shape of the electric field near the center of the trap, the region where the ions are trapped, forms a shape similar to that of a hyperbolic trap. Linear quadrupole ion trap A linear quadrupole ion trap is similar to a quadrupole ion trap, but it traps ions in a two dimensional quadrupole field, instead of a three-dimensional quadrupole field as in a 3D quadrupole ion trap. Thermo Fisher's LTQ ("linear trap quadrupole") is an example of the linear ion trap. A toroidal ion trap can be visualized as a linear quadrupole curved around and connected at the ends or as a cross-section of a 3D ion trap rotated on edge to form the toroid, donut-shaped trap. The trap can store large volumes of ions by distributing them throughout the ring-like trap structure. This toroidal shaped trap is a configuration that allows the increased miniaturization of an ion trap mass analyzer. Additionally, all ions are stored in the same trapping field and ejected together simplifying detection that can be complicated with array configurations due to variations in detector alignment and machining of the arrays. As with the toroidal trap, linear traps and 3D quadrupole ion traps are the most commonly miniaturized mass analyzers due to their high sensitivity, tolerance for mTorr pressure, and capabilities for single analyzer tandem mass spectrometry (e.g. product ion scans). Orbitrap Orbitrap instruments are similar to Fourier-transform ion cyclotron resonance mass spectrometers (see text below). Ions are electrostatically trapped in an orbit around a central, spindle shaped electrode. The electrode confines the ions so that they both orbit around the central electrode and oscillate back and forth along the central electrode's long axis. This oscillation generates an image current in the detector plates which is recorded by the instrument. The frequencies of these image currents depend on the mass-to-charge ratios of the ions. 
Mass spectra are obtained by Fourier transformation of the recorded image currents. Orbitraps have a high mass accuracy, high sensitivity and a good dynamic range. Fourier-transform ion cyclotron resonance Fourier-transform mass spectrometry (FTMS), or more precisely Fourier-transform ion cyclotron resonance MS, measures mass by detecting the image current produced by ions cyclotroning in the presence of a magnetic field. Instead of measuring the deflection of ions with a detector such as an electron multiplier, the ions are injected into a Penning trap (a static electric/magnetic ion trap) where they effectively form part of a circuit. Detectors at fixed positions in space measure the electrical signal of ions which pass near them over time, producing a periodic signal. Since the frequency of an ion's cycling is determined by its mass-to-charge ratio, this can be deconvoluted by performing a Fourier transform on the signal. FTMS has the advantage of high sensitivity (since each ion is "counted" more than once) and much higher resolution and thus precision. Ion cyclotron resonance (ICR) is an older mass analysis technique similar to FTMS except that ions are detected with a traditional detector. Ions trapped in a Penning trap are excited by an RF electric field until they impact the wall of the trap, where the detector is located. Ions of different mass are resolved according to impact time. Detectors The final element of the mass spectrometer is the detector. The detector records either the charge induced or the current produced when an ion passes by or hits a surface. In a scanning instrument, the signal produced in the detector during the course of the scan versus where the instrument is in the scan (at what m/Q) will produce a mass spectrum, a record of ions as a function of m/Q. Typically, some type of electron multiplier is used, though other detectors including Faraday cups and ion-to-photon detectors are also used. Because the number of ions leaving the mass analyzer at a particular instant is typically quite small, considerable amplification is often necessary to get a signal. Microchannel plate detectors are commonly used in modern commercial instruments. In FTMS and Orbitraps, the detector consists of a pair of metal surfaces within the mass analyzer/ion trap region which the ions only pass near as they oscillate. No direct current is produced, only a weak AC image current is produced in a circuit between the electrodes. Other inductive detectors have also been used. Tandem mass spectrometry A tandem mass spectrometer is one capable of multiple rounds of mass spectrometry, usually separated by some form of molecule fragmentation. For example, one mass analyzer can isolate one peptide from many entering a mass spectrometer. A collision cell then stabilizes the peptide ions while they collide with a gas, causing them to fragment by collision-induced dissociation (CID). A further mass analyzer then sorts the fragments produced from the peptides. Tandem MS can also be done in a single mass analyzer over time, as in a quadrupole ion trap. There are various methods for fragmenting molecules for tandem MS, including collision-induced dissociation (CID), electron capture dissociation (ECD), electron transfer dissociation (ETD), infrared multiphoton dissociation (IRMPD), blackbody infrared radiative dissociation (BIRD), electron-detachment dissociation (EDD) and surface-induced dissociation (SID). An important application using tandem mass spectrometry is in protein identification. 
Tandem mass spectrometry enables a variety of experimental sequences. Many commercial mass spectrometers are designed to expedite the execution of such routine sequences as selected reaction monitoring (SRM), precursor ion scanning, product ion scanning, and neutral loss scanning. In SRM, the first analyzer allows only a single mass through and the second analyzer monitors for multiple user-defined fragment ions over longer dwell-times than could be achieved in a full scan. This increases sensitivity. In product ion scans, the first mass analyzer is fixed to select a particular precursor ion ("parent"), while the second is scanned to find all the fragments ("products", or "daughter ions") to which it can be fragmented in the collision cell. In precursor ion scans, the second mass analyzer is fixed to select a particular fragment ion ("daughter"), while the first is scanned to find all possible precursor ions that could give rise to this fragment. In neutral loss scans, the two mass analyzers are scanned in parallel, but separated by the mass of a molecular subunit of interest to the analyst. Ions are detected if they lose that fixed mass during fragmentation. This can be used to look for any chemical that is capable of losing a particular neutral group, for example a sugar residue. Together, neutral loss and precursor ion scans can be used to hunt for chemicals with particular motifs. Another type of tandem mass spectrometry used for radiocarbon dating is accelerator mass spectrometry (AMS), which uses very high voltages, usually in the mega-volt range, to accelerate negative ions into a type of tandem mass spectrometer. The METLIN Metabolite and Chemical Entity Database is the largest repository of experimental tandem mass spectrometry data acquired from standards. The tandem mass spectrometry data on over 930,000 molecular standards (as of January 2024) is provided to facilitate the identification of chemical entities from tandem mass spectrometry experiments. In addition to the identification of known molecules it is also useful for identifying unknowns using its similarity searching/analysis. All tandem mass spectrometry data comes from the experimental analysis of standards at multiple collision energies and in both positive and negative ionization modes. Common mass spectrometer configurations and techniques When a specific combination of source, analyzer, and detector becomes conventional in practice, a compound acronym may arise to designate it succinctly. One example is MALDI-TOF, which refers to a combination of a matrix-assisted laser desorption/ionization source with a time-of-flight mass analyzer. Other examples include inductively coupled plasma-mass spectrometry (ICP-MS), accelerator mass spectrometry (AMS), thermal ionization-mass spectrometry (TIMS) and spark source mass spectrometry (SSMS). Certain applications of mass spectrometry have developed monikers that although strictly speaking would seem to refer to a broad application, in practice have come instead to connote a specific or a limited number of instrument configurations. An example of this is isotope-ratio mass spectrometry (IRMS), which refers in practice to the use of a limited number of sector based mass analyzers; this name is used to refer to both the application and the instrument used for the application. 
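At the data level, the neutral loss scan described earlier in this section reduces to a filter on precursor/fragment mass differences. A hypothetical sketch follows; the peak pairs and the 162.05 Da hexose-like loss are invented purely for illustration.

```python
# Toy neutral-loss filter: keep precursor/fragment pairs whose mass difference
# matches a fixed neutral loss within a tolerance. Data below are invented.

def neutral_loss_hits(pairs, loss, tol=0.02):
    return [(p, f) for p, f in pairs if abs((p - f) - loss) <= tol]

observed_pairs = [(527.16, 365.11), (512.20, 480.17), (689.21, 527.16)]
print(neutral_loss_hits(observed_pairs, loss=162.05))
# -> [(527.16, 365.11), (689.21, 527.16)]
```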
Separation techniques combined with mass spectrometry An important enhancement to the mass resolving and mass determining capabilities of mass spectrometry is using it in tandem with chromatographic and other separation techniques. Gas chromatography A common combination is gas chromatography-mass spectrometry (GC/MS or GC-MS). In this technique, a gas chromatograph is used to separate different compounds. This stream of separated compounds is fed online into the ion source, a metallic filament to which voltage is applied. This filament emits electrons which ionize the compounds. The ions can then further fragment, yielding predictable patterns. Intact ions and fragments pass into the mass spectrometer's analyzer and are eventually detected. However, the high temperatures (300 °C) used in the GC-MS injection port (and oven) can result in thermal degradation of injected molecules, thus resulting in the measurement of degradation products instead of the actual molecule(s) of interest. Liquid chromatography Similarly to gas chromatography MS (GC-MS), liquid chromatography-mass spectrometry (LC/MS or LC-MS) separates compounds chromatographically before they are introduced to the ion source and mass spectrometer. It differs from GC-MS in that the mobile phase is liquid, usually a mixture of water and organic solvents, instead of gas. Most commonly, an electrospray ionization source is used in LC-MS. Other popular and commercially available LC-MS ion sources are atmospheric pressure chemical ionization and atmospheric pressure photoionization. There are also some newly developed ionization techniques like laser spray. Capillary electrophoresis–mass spectrometry Capillary electrophoresis–mass spectrometry (CE-MS) is a technique that combines the liquid separation process of capillary electrophoresis with mass spectrometry. CE-MS is typically coupled to electrospray ionization. Ion mobility Ion mobility spectrometry-mass spectrometry (IMS/MS or IMMS) is a technique where ions are first separated by drift time through some neutral gas under an applied electrical potential gradient before being introduced into a mass spectrometer. Drift time is a measure of the collisional cross section relative to the charge of the ion. The duty cycle of IMS (the time over which the experiment takes place) is longer than most mass spectrometric techniques, such that the mass spectrometer can sample along the course of the IMS separation. This produces data about the IMS separation and the mass-to-charge ratio of the ions in a manner similar to LC-MS. The duty cycle of IMS is short relative to liquid chromatography or gas chromatography separations and can thus be coupled to such techniques, producing triple modalities such as LC/IMS/MS. Data and analysis Data representations Mass spectrometry produces various types of data. The most common data representation is the mass spectrum. Certain types of mass spectrometry data are best represented as a mass chromatogram. Types of chromatograms include selected ion monitoring (SIM), total ion current (TIC), and selected reaction monitoring (SRM), among many others. Other types of mass spectrometry data are well represented as a three-dimensional contour map. In this form, the mass-to-charge, m/z is on the x-axis, intensity the y-axis, and an additional experimental parameter, such as time, is recorded on the z-axis. Data analysis Mass spectrometry data analysis is specific to the type of experiment producing the data. 
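As a toy illustration of the chromatogram types mentioned above: a total ion current (TIC) point is simply the summed intensity of one scan, while a SIM-style trace keeps only a single m/z. The scan data below are invented.

```python
# Invented example scans: build a TIC and a single-m/z (SIM-style) trace.
scans = [
    {"rt": 0.1, "peaks": {121.05: 300, 255.23: 80}},
    {"rt": 0.2, "peaks": {121.05: 2500, 255.23: 120}},
    {"rt": 0.3, "peaks": {121.05: 900, 301.14: 60}},
]

tic = [(s["rt"], sum(s["peaks"].values())) for s in scans]       # total ion current
sim_121 = [(s["rt"], s["peaks"].get(121.05, 0)) for s in scans]  # one m/z only
print("TIC:", tic)
print("m/z 121.05 trace:", sim_121)
```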
General subdivisions of data are fundamental to understanding any data. Many mass spectrometers work in either negative ion mode or positive ion mode. It is very important to know whether the observed ions are negatively or positively charged. This is often important in determining the neutral mass but it also indicates something about the nature of the molecules. Different types of ion source result in different arrays of fragments produced from the original molecules. An electron ionization source produces many fragments and mostly singly charged radical ions (odd number of electrons), whereas an electrospray source usually produces non-radical quasimolecular ions that are frequently multiply charged. Tandem mass spectrometry purposely produces fragment ions post-source and can drastically change the sort of data achieved by an experiment. Knowledge of the origin of a sample can provide insight into the component molecules of the sample and their fragmentations. A sample from a synthesis/manufacturing process will probably contain impurities chemically related to the target component. A crudely prepared biological sample will probably contain a certain amount of salt, which may form adducts with the analyte molecules in certain analyses. Results can also depend heavily on sample preparation and on how the sample was run/introduced. An important example is the issue of which matrix is used for MALDI spotting, since much of the energetics of the desorption/ionization event is controlled by the matrix rather than the laser power. Sometimes samples are spiked with sodium or another ion-carrying species to produce adducts rather than a protonated species. Mass spectrometry can measure molar mass, molecular structure, and sample purity. Each of these questions requires a different experimental procedure; therefore, adequate definition of the experimental goal is a prerequisite for collecting the proper data and successfully interpreting it. Interpretation of mass spectra Since the precise structure or peptide sequence of a molecule is deciphered through the set of fragment masses, the interpretation of mass spectra requires combined use of various techniques. Usually the first strategy for identifying an unknown compound is to compare its experimental mass spectrum against a library of mass spectra. If no matches result from the search, then manual interpretation or software-assisted interpretation of mass spectra must be performed. Computer simulation of ionization and fragmentation processes occurring in a mass spectrometer is the primary tool for assigning structure or peptide sequence to a molecule. A priori structural information is fragmented in silico and the resulting pattern is compared with the observed spectrum. Such simulation is often supported by a fragmentation library that contains published patterns of known decomposition reactions. Software taking advantage of this idea has been developed for both small molecules and proteins. Analysis of mass spectra can also make use of spectra with accurate mass. A mass-to-charge ratio value (m/z) with only integer precision can represent an immense number of theoretically possible ion structures; however, more precise mass figures significantly reduce the number of candidate molecular formulas. A computer algorithm called a formula generator calculates all molecular formulas that theoretically fit a given mass within a specified tolerance. 
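The formula-generator idea in the last sentence can be sketched as a brute-force search. In the sketch below the element ranges, the ppm tolerance and the example mass are arbitrary assumptions chosen for illustration.

```python
# Minimal brute-force "formula generator": enumerate CxHyNzOw compositions and
# keep those whose monoisotopic mass matches the target within a ppm tolerance.

MONO = {"C": 12.0, "H": 1.00782503, "N": 14.0030740, "O": 15.9949146}

def formula_candidates(target, ppm=5.0, max_c=20, max_h=40, max_n=5, max_o=10):
    tol = target * ppm * 1e-6
    hits = []
    for c in range(max_c + 1):
        for h in range(max_h + 1):
            for n in range(max_n + 1):
                for o in range(max_o + 1):
                    m = c*MONO["C"] + h*MONO["H"] + n*MONO["N"] + o*MONO["O"]
                    if abs(m - target) <= tol:
                        hits.append((f"C{c}H{h}N{n}O{o}", round(m, 5)))
    return hits

print(formula_candidates(180.06339))  # example neutral mass (hexose-like)
```

A real formula generator additionally applies valence and isotope-pattern constraints to prune chemically impossible compositions.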
A recent technique for structure elucidation in mass spectrometry, called precursor ion fingerprinting, identifies individual pieces of structural information by conducting a search of the tandem spectra of the molecule under investigation against a library of the product-ion spectra of structurally characterized precursor ions. Applications Mass spectrometry has both qualitative and quantitative uses. These include identifying unknown compounds, determining the isotopic composition of elements in a molecule, and determining the structure of a compound by observing its fragmentation. Other uses include quantifying the amount of a compound in a sample or studying the fundamentals of gas phase ion chemistry (the chemistry of ions and neutrals in a vacuum). MS is now commonly used in analytical laboratories that study physical, chemical, or biological properties of a great variety of compounds. Quantification can be relative (analyzed relative to a reference sample) or absolute (analyzed using a standard curve method). As an analytical technique it possesses distinct advantages such as: Increased sensitivity over most other analytical techniques because the analyzer, as a mass-charge filter, reduces background interference, Excellent specificity from characteristic fragmentation patterns to identify unknowns or confirm the presence of suspected compounds, Information about molecular weight, Information about the isotopic abundance of elements, Temporally resolved chemical data. A few of the disadvantages of the method are that it often fails to distinguish between optical and geometrical isomers and between substituents in the o-, m- and p- positions of an aromatic ring. Also, its scope is limited in identifying hydrocarbons that produce similar fragment ions. Isotope ratio MS: isotope dating and tracing Mass spectrometry is also used to determine the isotopic composition of elements within a sample. Differences in mass among isotopes of an element are very small, and the less abundant isotopes of an element are typically very rare, so a very sensitive instrument is required. These instruments, sometimes referred to as isotope ratio mass spectrometers (IR-MS), usually use a single magnet to bend a beam of ionized particles towards a series of Faraday cups which convert particle impacts to electric current. A fast on-line analysis of deuterium content of water can be done using flowing afterglow mass spectrometry, FA-MS. Probably the most sensitive and accurate mass spectrometer for this purpose is the accelerator mass spectrometer (AMS). This is because it provides ultimate sensitivity, capable of measuring individual atoms and measuring nuclides with a dynamic range of ~10^15 relative to the major stable isotope. Isotope ratios are important markers of a variety of processes. Some isotope ratios are used to determine the age of materials, for example as in carbon dating. Labeling with stable isotopes is also used for protein quantification (see protein characterization below). Membrane-introduction mass spectrometry: measuring gases in solution Membrane-introduction mass spectrometry combines the isotope ratio MS with a reaction chamber/cell separated by a gas-permeable membrane. This method allows the study of gases as they evolve in solution. This method has been extensively used for the study of the production of oxygen by Photosystem II. 
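Isotope ratios such as those discussed above are conventionally reported as delta values, in per mil relative to a reference ratio. A minimal sketch follows; the sample ratio is invented, and the VPDB 13C/12C reference used is the commonly quoted value.

```python
# Delta-notation sketch: delta = (R_sample / R_standard - 1) * 1000 (per mil).
# The sample ratio below is invented for illustration.

def delta_per_mil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.0112372            # commonly quoted VPDB 13C/12C reference ratio
r_sample = 0.0109560          # measured 13C/12C of the (invented) sample
print(f"delta 13C = {delta_per_mil(r_sample, R_VPDB):+.1f} per mil")
```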
Trace gas analysis Several techniques use ions created in a dedicated ion source injected into a flow tube or a drift tube: selected ion flow tube mass spectrometry (SIFT-MS) and proton transfer reaction mass spectrometry (PTR-MS) are variants of chemical ionization dedicated to trace gas analysis of air, breath or liquid headspace, using a well-defined reaction time that allows calculation of analyte concentrations from the known reaction kinetics without the need for an internal standard or calibration. Another technique with applications in the trace gas analysis field is secondary electrospray ionization (SESI-MS), which is a variant of electrospray ionization. SESI consists of an electrospray plume of pure acidified solvent that interacts with neutral vapors. Vapor molecules get ionized at atmospheric pressure when charge is transferred from the ions formed in the electrospray to the molecules. One advantage of this approach is that it is compatible with most ESI-MS systems. Residual gas analysis Atom probe An atom probe is an instrument that combines time-of-flight mass spectrometry and field-evaporation microscopy to map the location of individual atoms. Pharmacokinetics Pharmacokinetics is often studied using mass spectrometry because of the complex nature of the matrix (often blood or urine) and the need for high sensitivity to observe low dose and long time point data. The most common instrumentation used in this application is LC-MS with a triple quadrupole mass spectrometer. Tandem mass spectrometry is usually employed for added specificity. Standard curves and internal standards are used for quantitation of usually a single pharmaceutical in the samples. The samples represent different time points as a pharmaceutical is administered and then metabolized or cleared from the body. Blank or t=0 samples taken before administration are important in determining background and ensuring data integrity with such complex sample matrices. Much attention is paid to the linearity of the standard curve; however, it is not uncommon to use curve fitting with more complex functions such as quadratics, since the response of most mass spectrometers is less than linear across large concentration ranges. There is currently considerable interest in the use of very high sensitivity mass spectrometry for microdosing studies, which are seen as a promising alternative to animal experimentation. Recent studies show that secondary electrospray ionization (SESI) is a powerful technique to monitor drug kinetics via breath analysis. Because breath is naturally produced, several data points can be readily collected, greatly increasing the number of data points available. In animal studies, this SESI approach can reduce animal sacrifice. In humans, non-invasive SESI-MS analysis of breath can help study the kinetics of drugs at a personalized level. Protein characterization Mass spectrometry is an important method for the characterization and sequencing of proteins. The two primary methods for ionization of whole proteins are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In keeping with the performance and mass range of available mass spectrometers, two approaches are used for characterizing proteins. In the first, intact proteins are ionized by either of the two techniques described above, and then introduced to a mass analyzer. This approach is referred to as the "top-down" strategy of protein analysis. The top-down approach, however, is largely limited to low-throughput single-protein studies. 
In the second, proteins are enzymatically digested into smaller peptides using proteases such as trypsin or pepsin, either in solution or in gel after electrophoretic separation. Other proteolytic agents are also used. The collection of peptide products are often separated by chromatography prior to introduction to the mass analyzer. When the characteristic pattern of peptides is used for the identification of the protein the method is called peptide mass fingerprinting (PMF), if the identification is performed using the sequence data determined in tandem MS analysis it is called de novo peptide sequencing. These procedures of protein analysis are also referred to as the "bottom-up" approach, and have also been used to analyse the distribution and position of post-translational modifications such as phosphorylation on proteins. A third approach is also beginning to be used, this intermediate "middle-down" approach involves analyzing proteolytic peptides that are larger than the typical tryptic peptide. Space exploration As a standard method for analysis, mass spectrometers have reached other planets and moons. Two were taken to Mars by the Viking program. In early 2005 the Cassini–Huygens mission delivered a specialized GC-MS instrument aboard the Huygens probe through the atmosphere of Titan, the largest moon of the planet Saturn. This instrument analyzed atmospheric samples along its descent trajectory and was able to vaporize and analyze samples of Titan's frozen, hydrocarbon covered surface once the probe had landed. These measurements compare the abundance of isotope(s) of each particle comparatively to earth's natural abundance. Also on board the Cassini–Huygens spacecraft was an ion and neutral mass spectrometer which had been taking measurements of Titan's atmospheric composition as well as the composition of Enceladus' plumes. A Thermal and Evolved Gas Analyzer mass spectrometer was carried by the Mars Phoenix Lander launched in 2007. Mass spectrometers are also widely used in space missions to measure the composition of plasmas. For example, the Cassini spacecraft carried the Cassini Plasma Spectrometer (CAPS), which measured the mass of ions in Saturn's magnetosphere. Respired gas monitor Mass spectrometers were used in hospitals for respiratory gas analysis beginning around 1975 through the end of the century. Some are probably still in use but none are currently being manufactured. Found mostly in the operating room, they were a part of a complex system, in which respired gas samples from patients undergoing anesthesia were drawn into the instrument through a valve mechanism designed to sequentially connect up to 32 rooms to the mass spectrometer. A computer directed all operations of the system. The data collected from the mass spectrometer was delivered to the individual rooms for the anesthesiologist to use. The uniqueness of this magnetic sector mass spectrometer may have been the fact that a plane of detectors, each purposely positioned to collect all of the ion species expected to be in the samples, allowed the instrument to simultaneously report all of the gases respired by the patient. Although the mass range was limited to slightly over 120 u, fragmentation of some of the heavier molecules negated the need for a higher detection limit. Preparative mass spectrometry The primary function of mass spectrometry is as a tool for chemical analyses based on detection and quantification of ions according to their mass-to-charge ratio. 
However, mass spectrometry also shows promise for material synthesis. Ion soft landing is characterized by deposition of intact species on surfaces at low kinetic energies which precludes the fragmentation of the incident species. The soft landing technique was first reported in 1977 for the reaction of low energy sulfur containing ions on a lead surface. See also Bioelectrospray Dumas method of molecular weight determination Evolved gas analysis Helium mass spectrometer Isotope dilution MassBank (database), a Japanese spectral database Mass spectrometry imaging Mass spectrometry software MasSpec Pen Micro-arrays for mass spectrometry Nanoscale secondary ion mass spectrometry Reflectron References Bibliography External links Interactive tutorial on mass spectra National High Magnetic Field Laboratory Mass spectrometer simulation An interactive application simulating the console of a mass spectrometer Realtime Mass Spectra simulation Tool to simulate mass spectra in the browser Chemical pathology Spectrometers Measuring instruments Scientific instruments Scientific techniques Clinical pathology
Mass spectrometry
[ "Physics", "Chemistry", "Technology", "Engineering", "Biology" ]
9,527
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Scientific instruments", "Mass spectrometry", "Biochemistry", "Spectrometers", "Chemical pathology", "Spectroscopy", "Matter" ]
283,829
https://en.wikipedia.org/wiki/Ebonite
Ebonite is a brand name for a material generically known as hard rubber or vulcanite, obtained via vulcanizing natural rubber for prolonged periods. Ebonite may contain from 25% to 80% sulfur and linseed oil. Its name comes from its intended use as an artificial substitute for ebony wood. The material has also been called vulcanite, although that name formally refers to the mineral vulcanite. Charles Goodyear's brother, Nelson Goodyear, experimented with the chemistry of ebonite composites. In 1851, he used zinc oxide as a filler. Hugh Silver was responsible for giving it its name. Properties The sulfur percentage and the applied temperatures and duration of vulcanizing are the main variables that determine the technical properties of the hard rubber polysulfide elastomer. The occurring reaction is basically addition of sulfur at the double bonds, forming intramolecular ring structures, so a large portion of the sulfur is highly cross-linked in the form of intramolecular addition. As a result of having a maximum sulfur content up to 40%, it may be used to resist swelling and minimize dielectric loss. The strongest mechanical properties and greatest heat resistance is obtained with sulfur contents around 35% while the highest impact strength can be obtained with a lower sulfur content of 30%. The rigidity of hard rubber at room temperature is attributed to the van der Waals forces between the intramolecular sulfur atoms. Raising the temperature gradually increases the molecular vibrations that overcome the van der Waals forces making it elastic. Hard rubber has a content mixture dependent density around 1.1 to 1.2. When reheated hard rubber exhibits shape-memory effect and can be fairly easily reshaped within certain limits. Depending on the sulfur percentage hard rubber has a thermoplastic transition or softening temperature of . The material is brittle, which produces problems in its use in battery cases for example, where the integrity of the case is vital to prevent leakage of sulfuric acid. It has now been generally replaced by carbon black-filled polypropylene. Ultraviolet and daylight exposure Under the influence of the ultraviolet portion of daylight, hard rubber oxidizes. Subsequent exposure to moisture bonds water with free sulfur on the surface, creating sulfates and sulfuric acid at the surface that are very hygroscopic. The sulfates condense water from the air, forming a hydrophilic film with favorable wettability characteristics on the surface. These aging processes will gradually discolor the surface grayish green to brown and cause rapid deterioration of electric surface resistivity. Contamination Contaminated ebonite was problematic when it was used for electronics. During manufacturing the ebonite was rolled between metal foil sheets, which were peeled off, leaving traces of metal behind. For electronic use the surface was ground to remove these metal particles. Applications Hard rubber was used in early 20th century bowling balls; however, it was phased out in favor of other materials (the Ebonite name remains as a trade name for one of the major manufacturers of polymer balls). It has been used in electric plugs, tobacco pipe mouthpieces (in competition with Lucite), fishing reels, hockey pucks, fountain pen bodies and nib feeds, saxophone and clarinet mouthpieces, as well as complete humidity-stable clarinets. Hard rubber is often seen as the wheel material in casters. 
It is also commonly used in physics classrooms to demonstrate static electricity, because it is at or near the negative end of the triboelectric series. Hard rubber was used in the cases of automobile batteries for years, thus establishing black as their traditional colour even long after stronger modern plastics like polypropylene were substituted. It was used for decades in hair combs made by Ace, now part of Newell Rubbermaid, although the current models are known to be produced solely with plastics. Ebonite is used as an anticorrosive lining for various (mainly storage) vessels that contain diluted hydrochloric acid. It forms bubbles when storing hydrofluoric acid at temperatures above room temperature, or for prolonged durations. See also Bakelite, an early mainstream plastic material. Tenite, a cellulosic thermoplastic material. References Materials Synthetic resins Organic polymers
Ebonite
[ "Physics", "Chemistry" ]
870
[ "Organic polymers", "Synthetic resins", "Synthetic materials", "Organic compounds", "Materials", "Matter" ]
284,369
https://en.wikipedia.org/wiki/Babinet%27s%20principle
In physics, Babinet's principle states that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape except for the overall forward beam intensity. It was formulated in the 1800s by French physicist Jacques Babinet. A quantum version of Babinet's principle has been derived in the context of quantum networks. Explanation Assume B is the original diffracting body, and B' is its complement, i.e., a body that is transparent where B is opaque and opaque where B is transparent. The sum of the radiation patterns caused by B and B' must be the same as the radiation pattern of the unobstructed beam. In places where the undisturbed beam would not have reached, this means that the radiation patterns caused by B and B' must be opposite in phase, but equal in amplitude. Diffraction patterns from apertures or bodies of known size and shape are compared with the pattern from the object to be measured. For instance, the size of red blood cells can be found by comparing their diffraction pattern with an array of small holes. One consequence of Babinet's principle is the extinction paradox, which states that in the diffraction limit, the radiation removed from the beam due to a particle is equal to twice the particle's cross section times the flux. This is because the amount of radiation absorbed or reflected is equal to the flux through the particle's cross-section, but by Babinet's principle the light diffracted forward is the same as the light that would pass through a hole in the shape of the particle; so the amount of light diffracted forward also equals the flux through the particle's cross section. The principle is most often used in optics but it is also true for other forms of electromagnetic radiation and is, in fact, a general theorem of diffraction in wave mechanics. Babinet's principle finds most use in its ability to detect equivalence in size and shape. Demonstration experiment The effect can be simply observed by using a laser. First place a thin (approx. 0.1 mm) wire into the laser beam and observe the diffraction pattern. Then observe the diffraction pattern when the laser is shone through a narrow slit. The slit can be made either by using a laser printer or photocopier to print onto clear plastic film or by using a pin to draw a line on a piece of glass that has been smoked over a candle flame. Babinet's principle in radiofrequency structures Babinet's principle can be used in antenna engineering to find complementary impedances. A consequence of the principle states that Zmetal × Zslot = η²/4, where Zmetal and Zslot are the input impedances of the metal and slot radiating pieces, and η is the intrinsic impedance of the medium in which the structure is immersed. In addition, Zslot is not only the impedance of the slot, but can be viewed as the complementary structure impedance (a dipole or loop in many cases). Zmetal is often referred to as Zscreen, where the screen comes from the optical definition. The thin sheet or screen does not have to be metal, but rather any material that supports a current density vector J leading to a magnetic vector potential A. One issue with this equation is that the screen must be relatively thin compared to the given wavelength (or range thereof). If it is not, modes can begin to form, or fringing fields may no longer be negligible. Note that Babinet's principle does not account for polarization. In 1946, H. G. Booker published Slot Aerials and Their Relation to Complementary Wire Aerials to extend Babinet's principle to account for polarization (otherwise known as Booker's extension). 
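As a numerical illustration of the complementary-impedance relation above (a sketch, not a design procedure): given an assumed metal-antenna impedance, the complementary slot impedance follows directly. The 73 + j42.5 Ω value below is the textbook thin half-wave dipole impedance, used here only as an example input.

```python
# Sketch of the complementary-impedance relation Zmetal * Zslot = eta**2 / 4.
# The dipole impedance below is an assumed example value.

ETA0 = 376.730  # intrinsic impedance of free space, ohms

def complementary_slot_impedance(z_metal, eta=ETA0):
    return (eta ** 2) / (4 * z_metal)

z_dipole = complex(73.0, 42.5)                      # thin half-wave dipole
z_slot = complementary_slot_impedance(z_dipole)
print(f"Z_slot = {z_slot.real:.1f} {z_slot.imag:+.1f}j ohms")  # about 363 - 211j
```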
This information is drawn from, as stated above, Balanis's third edition Antenna Theory textbook. See also Bistatic radar References External links Light Diffraction and Babinet Principle PhysicsOpenLab Physical paradoxes Diffraction Antennas (radio)
Babinet's principle
[ "Physics", "Chemistry", "Materials_science" ]
825
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
285,048
https://en.wikipedia.org/wiki/Muon-catalyzed%20fusion
Muon-catalyzed fusion (abbreviated as μCF or MCF) is a process allowing nuclear fusion to take place at temperatures significantly lower than the temperatures required for thermonuclear fusion, even at room temperature or lower. It is one of the few known ways of catalyzing nuclear fusion reactions. Muons are unstable subatomic particles which are similar to electrons but 207 times more massive. If a muon replaces one of the electrons in a hydrogen molecule, the nuclei are consequently drawn 186 times closer than in a normal molecule, due to the reduced mass being 186 times the mass of an electron. When the nuclei move closer together, the fusion probability increases, to the point where a significant number of fusion events can happen at room temperature. Methods for obtaining muons, however, require far more energy than can be produced by the resulting fusion reactions. Muons have a mean lifetime of about 2.2 microseconds, much longer than many other subatomic particles but nevertheless far too brief to allow their useful storage. To create useful room-temperature muon-catalyzed fusion, reactors would need a cheap, efficient muon source and/or a way for each individual muon to catalyze many more fusion reactions. History Andrei Sakharov and F.C. Frank predicted the phenomenon of muon-catalyzed fusion on theoretical grounds before 1950. Yakov Borisovich Zel'dovich also wrote about the phenomenon of muon-catalyzed fusion in 1954. Luis W. Alvarez et al., when analyzing the outcome of some experiments with muons incident on a hydrogen bubble chamber at Berkeley in 1956, observed muon-catalysis of exothermic p–d (proton and deuteron) nuclear fusion, which results in a helion, a gamma ray, and a release of about 5.5 MeV of energy. The Alvarez experimental results, in particular, spurred John David Jackson to publish one of the first comprehensive theoretical studies of muon-catalyzed fusion in his ground-breaking 1957 paper. This paper contained the first serious speculations on useful energy release from muon-catalyzed fusion. Jackson concluded that it would be impractical as an energy source, unless the "alpha-sticking problem" (see below) could be solved, leading potentially to an energetically cheaper and more efficient way of utilizing the catalyzing muons. Viability as a power source Potential benefits If muon-catalyzed d–t nuclear fusion is realized practically, it will be a much more attractive way of generating power than conventional nuclear fission reactors because muon-catalyzed d–t nuclear fusion (like most other types of nuclear fusion) produces far fewer harmful (and far less long-lived) radioactive wastes. The large number of neutrons produced in muon-catalyzed d–t nuclear fusions may be used to breed fissile fuels from fertile material – for example, thorium-232 could breed uranium-233 in this way. The fissile fuels that have been bred can then be "burned," either in a conventional critical nuclear fission reactor or in an unconventional subcritical fission reactor, for example, a reactor using nuclear transmutation to process nuclear waste, or a reactor using the energy amplifier concept devised by Carlo Rubbia and others. Another benefit of muon-catalyzed fusion is that the fusion process can start with pure deuterium gas without tritium. Plasma fusion reactors like ITER or Wendelstein 7-X need tritium to initiate and also need a tritium factory. 
Muon-catalyzed fusion generates tritium under operation and increases operating efficiency up to an optimum point when the deuterium:tritium ratio reaches about 1:1. Muon-catalyzed fusion can operate as a tritium factory and deliver tritium for material and plasma fusion research. Problems facing practical exploitation Except for some refinements, little has changed since Jackson's 1957 assessment of the feasibility of muon-catalyzed fusion other than Vesman's 1967 prediction of the hyperfine resonant formation of the muonic (d–μ–t)+ molecular ion which was subsequently experimentally observed. This helped spark renewed interest in the whole field of muon-catalyzed fusion, which remains an active area of research worldwide. However, as Jackson observed in his paper, muon-catalyzed fusion is "unlikely" to provide "useful power production ... unless an energetically cheaper way of producing μ−-mesons can be found." One practical problem with the muon-catalyzed fusion process is that muons are unstable, decaying in (in their rest frame). Hence, there needs to be some cheap means of producing muons, and the muons must be arranged to catalyze as many nuclear fusion reactions as possible before decaying. Another, and in many ways more serious, problem is the "alpha-sticking" problem, which was recognized by Jackson in his 1957 paper. The α-sticking problem is the approximately 1% probability of the muon "sticking" to the alpha particle that results from deuteron-triton nuclear fusion, thereby effectively removing the muon from the muon-catalysis process altogether. Even if muons were absolutely stable, each muon could catalyze, on average, only about 100 d-t fusions before sticking to an alpha particle, which is only about one-fifth the number of muon catalyzed d–t fusions needed for break-even, where as much thermal energy is generated as electrical energy is consumed to produce the muons in the first place, according to Jackson's rough estimate. More recent measurements seem to point to more encouraging values for the α-sticking probability, finding the α-sticking probability to be around 0.3% to 0.5%, which could mean as many as about 200 (even up to 350) muon-catalyzed d–t fusions per muon. Indeed, the team led by Steven E. Jones achieved 150 d–t fusions per muon (average) at the Los Alamos Meson Physics Facility. The results were promising and almost enough to reach theoretical break-even. Unfortunately, these measurements for the number of muon-catalyzed d–t fusions per muon are still not enough to reach industrial break-even. Even with break-even, the conversion efficiency from thermal energy to electrical energy is only about 40% or so, further limiting viability. The best recent estimates of the electrical "energy cost" per muon is about with accelerators that are (coincidentally) about 40% efficient at transforming electrical energy from the power grid into acceleration of the deuterons. As of 2012, no practical method of producing energy through this means has been published, although some discoveries using the Hall effect show promise. Alternative estimation of breakeven According to Gordon Pusch, a physicist at Argonne National Laboratory, various breakeven calculations on muon-catalyzed fusion omit the heat energy the muon beam itself deposits in the target. 
By taking this factor into account, muon-catalyzed fusion can already exceed breakeven; however, the recirculated power is usually very large compared to power out to the electrical grid (about 3–5 times as large, according to estimates). Despite this rather high recirculated power, the overall cycle efficiency is comparable to conventional fission reactors; however the need for 4–6 MW electrical generating capacity for each megawatt out to the grid probably represents an unacceptably large capital investment. Pusch suggested using Bogdan Maglich's "migma" self-colliding beam concept to significantly increase the muon production efficiency, by eliminating target losses, and using tritium nuclei as the driver beam, to optimize the number of negative muons. In 2021, Kelly, Hart and Rose produced a μCF model whereby the ratio, Q, of thermal energy produced to the kinetic energy of the accelerated deuterons used to create negative pions (and thus negative muons through pion decay) was optimized. In this model, the heat energy of the incoming deuterons as well as that of the particles produced due to the deuteron beam impacting a tungsten target was recaptured to the extent possible, as suggested by Gordon Pusch in the previous paragraph. Additionally, heat energy due to tritium breeding in a lithium-lead shell was recaptured, as suggested by Jändel, Danos and Rafelski in 1988. The best Q value was found to be about 130% assuming that 50% of the muons produced were actually utilized for fusion catalysis. Furthermore, assuming that the accelerator was 18% efficient at transforming electrical energy into deuteron kinetic energy and conversion efficiency of heat energy into electrical energy of 60%, they estimate that, currently, the amount of electrical energy that could be produced by a μCF reactor would be 14% of the electrical energy consumed. In order for this to improve, they suggest that some combination of a) increasing accelerator efficiency and b) increasing the number of fusion reactions per negative muon above the assumed level of 150 would be needed. Process To create this effect, a stream of negative muons, most often created by decaying pions, is sent to a block that may be made up of all three hydrogen isotopes (protium, deuterium, and/or tritium), where the block is usually frozen, and the block may be at temperatures of about 3 kelvin (−270 degrees Celsius) or so. The muon may bump the electron from one of the hydrogen isotopes. The muon, 207 times more massive than the electron, effectively shields and reduces the electromagnetic repulsion between two nuclei and draws them much closer into a covalent bond than an electron can. Because the nuclei are so close, the strong nuclear force is able to kick in and bind both nuclei together. They fuse, release the catalytic muon (most of the time), and part of the original mass of both nuclei is released as energetic particles, as with any other type of nuclear fusion. The release of the catalytic muon is critical to continue the reactions. The majority of the muons continue to bond with other hydrogen isotopes and continue fusing nuclei together. However, not all of the muons are recycled: some bond with other debris emitted following the fusion of the nuclei (such as alpha particles and helions), removing the muons from the catalytic process. This gradually chokes off the reactions, as there are fewer and fewer muons with which the nuclei may bond. 
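A crude way to see how alpha-sticking and the finite muon lifetime cap the catalysis chain is a geometric survival estimate, using the rough figures quoted in this article (a sticking probability of roughly 0.5%, molecular-ion formation times of order 10 nanoseconds, and the 2.2 microsecond lifetime). The simple model below is a sketch only, not a reproduction of the published calculations.

```python
# Crude estimate: per catalysis cycle the muon is lost either by sticking to
# an alpha particle or by decaying during the cycle, so the expected number of
# fusions is roughly 1 / (sticking probability + cycle time / lifetime).
# Parameter values are rough figures taken from the surrounding text.

def expected_fusions(sticking_prob=0.005, cycle_ns=10.0, lifetime_ns=2200.0):
    loss_per_cycle = sticking_prob + cycle_ns / lifetime_ns
    return 1.0 / loss_per_cycle

print(f"expected d-t fusions per muon ~ {expected_fusions():.0f}")  # about 100
```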
The number of reactions achieved in the lab can be as high as 150 d–t fusions per muon (average). Deuterium–tritium (d–t or dt) In the muon-catalyzed fusion of most interest, a positively charged deuteron (d), a positively charged triton (t), and a muon essentially form a positively charged muonic molecular heavy hydrogen ion (d–μ–t)+. The muon, with a rest mass 207 times greater than the rest mass of an electron, is able to drag the more massive triton and deuteron 207 times closer together to each other in the muonic (d–μ–t)+ molecular ion than can an electron in the corresponding electronic (d–e–t)+ molecular ion. The average separation between the triton and the deuteron in the electronic molecular ion is about one angstrom (100 pm), so the average separation between the triton and the deuteron in the muonic molecular ion is 207 times smaller than that. Due to the strong nuclear force, whenever the triton and the deuteron in the muonic molecular ion happen to get even closer to each other during their periodic vibrational motions, the probability is very greatly enhanced that the positively charged triton and the positively charged deuteron would undergo quantum tunnelling through the repulsive Coulomb barrier that acts to keep them apart. Indeed, the quantum mechanical tunnelling probability depends roughly exponentially on the average separation between the triton and the deuteron, allowing a single muon to catalyze the d–t nuclear fusion in less than about half a picosecond, once the muonic molecular ion is formed. The formation time of the muonic molecular ion is one of the "rate-limiting steps" in muon-catalyzed fusion that can easily take up to ten thousand or more picoseconds in a liquid molecular deuterium and tritium mixture (D2, DT, T2), for example. Each catalyzing muon thus spends most of its ephemeral existence of 2.2 microseconds, as measured in its rest frame, wandering around looking for suitable deuterons and tritons with which to bind. Another way of looking at muon-catalyzed fusion is to try to visualize the ground state orbit of a muon around either a deuteron or a triton. Suppose the muon happens to have fallen into an orbit around a deuteron initially, which it has about a 50% chance of doing if there are approximately equal numbers of deuterons and tritons present, forming an electrically neutral muonic deuterium atom (d–μ)0 that acts somewhat like a "fat, heavy neutron" due both to its relatively small size (again, 207 times smaller than an electrically neutral electronic deuterium atom (d–e)0) and to the very effective "shielding" by the muon of the positive charge of the proton in the deuteron. Even so, the muon still has a much greater chance of being transferred to any triton that comes near enough to the muonic deuterium than it does of forming a muonic molecular ion. The electrically neutral muonic tritium atom (t–μ)0 thus formed will act somewhat like an even "fatter, heavier neutron," but it will most likely hang on to its muon, eventually forming a muonic molecular ion, most likely due to the resonant formation of a hyperfine molecular state within an entire deuterium molecule D2 (d=e2=d), with the muonic molecular ion acting as a "fatter, heavier nucleus" of the "fatter, heavier" neutral "muonic/electronic" deuterium molecule ([d–μ–t]=e2=d), as predicted by Vesman, an Estonian graduate student, in 1967. 
Once the muonic molecular ion state is formed, the shielding by the muon of the positive charges of the proton of the triton and the proton of the deuteron from each other allows the triton and the deuteron to tunnel through the Coulomb barrier in time span of order of a nanosecond The muon survives the d–t muon-catalyzed nuclear fusion reaction and remains available (usually) to catalyze further d–t muon-catalyzed nuclear fusions. Each exothermic d–t nuclear fusion releases about 17.6 MeV of energy in the form of a "very fast" neutron having a kinetic energy of about 14.1 MeV and an alpha particle α (a helium-4 nucleus) with a kinetic energy of about 3.5 MeV. An additional 4.8 MeV can be gleaned by having the fast neutrons moderated in a suitable "blanket" surrounding the reaction chamber, with the blanket containing lithium-6, whose nuclei, known by some as "lithions," readily and exothermically absorb thermal neutrons, the lithium-6 being transmuted thereby into an alpha particle and a triton. Deuterium–deuterium and other types The first kind of muon–catalyzed fusion to be observed experimentally, by L.W. Alvarez et al., was protium (H or 1H1) and deuterium (D or 1H2) muon-catalyzed fusion. The fusion rate for p–d (or pd) muon-catalyzed fusion has been estimated to be about a million times slower than the fusion rate for d–t muon-catalyzed fusion. Of more practical interest, deuterium–deuterium muon-catalyzed fusion has been frequently observed and extensively studied experimentally, in large part because deuterium already exists in relative abundance and, like protium, deuterium is not at all radioactive. (Tritium rarely occurs naturally, and is radioactive with a half-life of about 12.5 years.) The fusion rate for d–d muon-catalyzed fusion has been estimated to be only about 1% of the fusion rate for d–t muon-catalyzed fusion, but this still gives about one d–d nuclear fusion every 10 to 100 picoseconds or so. However, the energy released with every d–d muon-catalyzed fusion reaction is only about 20% or so of the energy released with every d–t muon-catalyzed fusion reaction. Moreover, the catalyzing muon has a probability of sticking to at least one of the d–d muon-catalyzed fusion reaction products that Jackson in this 1957 paper estimated to be at least 10 times greater than the corresponding probability of the catalyzing muon sticking to at least one of the d–t muon-catalyzed fusion reaction products, thereby preventing the muon from catalyzing any more nuclear fusions. Effectively, this means that each muon catalyzing d–d muon-catalyzed fusion reactions in pure deuterium is only able to catalyze about one-tenth of the number of d–t muon-catalyzed fusion reactions that each muon is able to catalyze in a mixture of equal amounts of deuterium and tritium, and each d–d fusion only yields about one-fifth of the yield of each d–t fusion, thereby making the prospects for useful energy release from d–d muon-catalyzed fusion at least 50 times worse than the already dim prospects for useful energy release from d–t muon-catalyzed fusion. Potential "aneutronic" (or substantially aneutronic) nuclear fusion possibilities, which result in essentially no neutrons among the nuclear fusion products, are almost certainly not very amenable to muon-catalyzed fusion. 
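Putting the quoted numbers together gives a back-of-the-envelope feel for the breakeven arithmetic discussed earlier: roughly 150 catalyzed fusions per muon, 17.6 MeV per d–t fusion plus up to 4.8 MeV recovered in a lithium-6 blanket, and about 40% thermal-to-electric conversion. The electrical energy spent to produce one muon is left as an explicit assumption in the sketch below, since the text only notes that it is large and accelerator-dependent.

```python
# Back-of-the-envelope breakeven sketch. Fusion yield, blanket credit and
# conversion efficiency are figures quoted in the text; the electrical cost
# per muon (in GeV) is an assumed placeholder, not a value from the article.

def electric_out_over_in(fusions_per_muon=150.0,
                         mev_per_fusion=17.6,
                         mev_blanket_credit=4.8,
                         thermal_to_electric=0.40,
                         muon_cost_gev_electric=5.0):   # assumption
    thermal_mev = fusions_per_muon * (mev_per_fusion + mev_blanket_credit)
    electric_out_mev = thermal_mev * thermal_to_electric
    return electric_out_mev / (muon_cost_gev_electric * 1000.0)

print(f"electrical energy out / in = {electric_out_over_in():.2f}")  # about 0.27
```

With these inputs the ratio stays well below one, which is the shortfall the text describes.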
One such essentially aneutronic nuclear fusion reaction involves a deuteron from deuterium fusing with a helion (He+2) from helium-3, which yields an energetic alpha particle and a much more energetic proton, both positively charged (with a few neutrons coming from inevitable d–d nuclear fusion side reactions). However, one muon with only one negative electric charge is incapable of shielding both positive charges of a helion from the one positive charge of a deuteron. The chances of the requisite two muons being present simultaneously are exceptionally remote. In culture The term "cold fusion" was coined to refer to muon-catalyzed fusion in a 1956 New York Times article about Luis W. Alvarez's paper. In 1957 Theodore Sturgeon wrote a novelette, "The Pod in the Barrier", in which humanity has ubiquitous cold fusion reactors that work with muons. The reaction is "When hydrogen one and hydrogen two are in the presence of Mu mesons, they fuse into helium three, with an energy yield in electron volts of 5.4 times ten to the fifth power". Unlike the thermonuclear bomb contained in the Pod (which is used to destroy the Barrier) they can become temporarily disabled by "concentrated disbelief" that muon fusion works. In Sir Arthur C. Clarke's third novel in the Space Odyssey series, 2061: Odyssey Three, muon-catalyzed fusion is the technology that allows mankind to achieve easy interplanetary travel. The main character, Heywood Floyd, compares Luis Alvarez to Lord Rutherford for underestimating the future potential of their discoveries. Notes References External links Web archive backup: Articles and presentation on this topic Web archive backup: Muon-catalyzed fusion diagram Nuclear fusion de:Kalte Fusion#Myonen-katalysierte Fusion
Muon-catalyzed fusion
[ "Physics", "Chemistry" ]
4,233
[ "Nuclear fusion", "Nuclear physics" ]
285,422
https://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff%20formula
In mathematics, the Baker–Campbell–Hausdorff formula gives the value of that solves the equation for possibly noncommutative and in the Lie algebra of a Lie group. There are various ways of writing the formula, but all ultimately yield an expression for in Lie algebraic terms, that is, as a formal series (not necessarily convergent) in and and iterated commutators thereof. The first few terms of this series are: where "" indicates terms involving higher commutators of and . If and are sufficiently small elements of the Lie algebra of a Lie group , the series is convergent. Meanwhile, every element sufficiently close to the identity in can be expressed as for a small in . Thus, we can say that near the identity the group multiplication in —written as —can be expressed in purely Lie algebraic terms. The Baker–Campbell–Hausdorff formula can be used to give comparatively simple proofs of deep results in the Lie group–Lie algebra correspondence. If and are sufficiently small matrices, then can be computed as the logarithm of , where the exponentials and the logarithm can be computed as power series. The point of the Baker–Campbell–Hausdorff formula is then the highly nonobvious claim that can be expressed as a series in repeated commutators of and . Modern expositions of the formula can be found in, among other places, the books of Rossmann and Hall. History The formula is named after Henry Frederick Baker, John Edward Campbell, and Felix Hausdorff who stated its qualitative form, i.e. that only commutators and commutators of commutators, ad infinitum, are needed to express the solution. An earlier statement of the form was adumbrated by Friedrich Schur in 1890 where a convergent power series is given, with terms recursively defined. This qualitative form is what is used in the most important applications, such as the relatively accessible proofs of the Lie correspondence and in quantum field theory. Following Schur, it was noted in print by Campbell (1897); elaborated by Henri Poincaré (1899) and Baker (1902); and systematized geometrically, and linked to the Jacobi identity by Hausdorff (1906). The first actual explicit formula, with all numerical coefficients, is due to Eugene Dynkin (1947). The history of the formula is described in detail in the article of Achilles and Bonfiglioli and in the book of Bonfiglioli and Fulci. Explicit forms For many purposes, it is only necessary to know that an expansion for in terms of iterated commutators of and exists; the exact coefficients are often irrelevant. (See, for example, the discussion of the relationship between Lie group and Lie algebra homomorphisms in Section 5.2 of Hall's book, where the precise coefficients play no role in the argument.) A remarkably direct existence proof was given by Martin Eichler, see also the "Existence results" section below. In other cases, one may need detailed information about and it is therefore desirable to compute as explicitly as possible. Numerous formulas exist; we will describe two of the main ones (Dynkin's formula and the integral formula of Poincaré) in this section. Dynkin's formula Let G be a Lie group with Lie algebra . Let be the exponential map. The following general combinatorial formula was introduced by Eugene Dynkin (1947), where the sum is performed over all nonnegative values of and , and the following notation has been used: with the understanding that . The series is not convergent in general; it is convergent (and the stated formula is valid) for all sufficiently small and . 
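The first few terms referred to above are standard in the literature; written out as a LaTeX sketch:

```latex
% Leading terms of Z = log(e^X e^Y); all higher-order terms are further
% nested commutators of X and Y.
\[
  Z = X + Y + \tfrac{1}{2}[X,Y]
      + \tfrac{1}{12}\bigl[X,[X,Y]\bigr]
      - \tfrac{1}{12}\bigl[Y,[X,Y]\bigr]
      - \tfrac{1}{24}\bigl[Y,[X,[X,Y]]\bigr]
      + \cdots
\]
```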
Since , the term is zero if or if and . The first few terms are well-known, with all higher-order terms involving and commutator nestings thereof (thus in the Lie algebra): The above lists all summands of order 6 or lower (i.e. those containing 6 or fewer 's and 's). The (anti-)/symmetry in alternating orders of the expansion, follows from . A complete elementary proof of this formula can be found in the article on the derivative of the exponential map. An integral formula There are numerous other expressions for , many of which are used in the physics literature. A popular integral formula is involving the generating function for the Bernoulli numbers, utilized by Poincaré and Hausdorff. Matrix Lie group illustration For a matrix Lie group the Lie algebra is the tangent space of the identity I, and the commutator is simply ; the exponential map is the standard exponential map of matrices, When one solves for Z in using the series expansions for and one obtains a simpler formula: The first, second, third, and fourth order terms are: The formulas for the various 's is not the Baker–Campbell–Hausdorff formula. Rather, the Baker–Campbell–Hausdorff formula is one of various expressions for 's in terms of repeated commutators of and . The point is that it is far from obvious that it is possible to express each in terms of commutators. (The reader is invited, for example, to verify by direct computation that is expressible as a linear combination of the two nontrivial third-order commutators of and , namely and .) The general result that each is expressible as a combination of commutators was shown in an elegant, recursive way by Eichler. A consequence of the Baker–Campbell–Hausdorff formula is the following result about the trace: That is to say, since each with is expressible as a linear combination of commutators, the trace of each such terms is zero. Questions of convergence Suppose and are the following matrices in the Lie algebra (the space of matrices with trace zero): Then It is then not hard to show that there does not exist a matrix in with . (Similar examples may be found in the article of Wei.) This simple example illustrates that the various versions of the Baker–Campbell–Hausdorff formula, which give expressions for in terms of iterated Lie-brackets of and , describe formal power series whose convergence is not guaranteed. Thus, if one wants to be an actual element of the Lie algebra containing and (as opposed to a formal power series), one has to assume that and are small. Thus, the conclusion that the product operation on a Lie group is determined by the Lie algebra is only a local statement. Indeed, the result cannot be global, because globally one can have nonisomorphic Lie groups with isomorphic Lie algebras. Concretely, if working with a matrix Lie algebra and is a given submultiplicative matrix norm, convergence is guaranteed if Special cases If and commute, that is , the Baker–Campbell–Hausdorff formula reduces to . Another case assumes that commutes with both and , as for the nilpotent Heisenberg group. Then the formula reduces to its first three terms. This is the degenerate case used routinely in quantum mechanics, as illustrated below and is sometimes known as the disentangling theorem. In this case, there are no smallness restrictions on and . This result is behind the "exponentiated commutation relations" that enter into the Stone–von Neumann theorem. A simple proof of this identity is given below. 
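The degenerate case just described, in which the commutator [X, Y] commutes with both X and Y, can be illustrated with strictly upper-triangular 3 x 3 matrices, a standard matrix realization of the Heisenberg Lie algebra. The following sketch is illustrative (the matrix entries are arbitrary) and assumes NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    return A @ B - B @ A

# Strictly upper-triangular 3x3 matrices realize the Heisenberg Lie algebra:
# the commutator [X, Y] lies in the centre, so it commutes with X and Y.
X = np.array([[0.0, 1.3, 0.7],
              [0.0, 0.0, 2.1],
              [0.0, 0.0, 0.0]])
Y = np.array([[0.0, -0.4, 1.1],
              [0.0,  0.0, 0.9],
              [0.0,  0.0, 0.0]])

C = comm(X, Y)
assert np.allclose(comm(C, X), 0.0) and np.allclose(comm(C, Y), 0.0)

lhs = expm(X) @ expm(Y)
rhs = expm(X + Y + 0.5 * C)   # the series terminates after the first commutator
print(np.allclose(lhs, rhs))  # True; no smallness assumption is needed
```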
Another useful form of the general formula emphasizes expansion in terms of Y and uses the adjoint mapping notation : which is evident from the integral formula above. (The coefficients of the nested commutators with a single are normalized Bernoulli numbers.) Now assume that the commutator is a multiple of , so that . Then all iterated commutators will be multiples of , and no quadratic or higher terms in appear. Thus, the term above vanishes and we obtain: Again, in this case there are no smallness restriction on and . The restriction on guarantees that the expression on the right side makes sense. (When we may interpret .) We also obtain a simple "braiding identity": which may be written as an adjoint dilation: Existence results If and are matrices, one can compute using the power series for the exponential and logarithm, with convergence of the series if and are sufficiently small. It is natural to collect together all terms where the total degree in and equals a fixed number , giving an expression . (See the section "Matrix Lie group illustration" above for formulas for the first several 's.) A remarkably direct and concise, recursive proof that each is expressible in terms of repeated commutators of and was given by Martin Eichler. Alternatively, we can give an existence argument as follows. The Baker–Campbell–Hausdorff formula implies that if and are in some Lie algebra defined over any field of characteristic 0 like or , then can formally be written as an infinite sum of elements of . [This infinite series may or may not converge, so it need not define an actual element in .] For many applications, the mere assurance of the existence of this formal expression is sufficient, and an explicit expression for this infinite sum is not needed. This is for instance the case in the Lorentzian construction of a Lie group representation from a Lie algebra representation. Existence can be seen as follows. We consider the ring of all non-commuting formal power series with real coefficients in the non-commuting variables and . There is a ring homomorphism from to the tensor product of with over , called the coproduct, such that and (The definition of Δ is extended to the other elements of S by requiring R-linearity, multiplicativity and infinite additivity.) One can then verify the following properties: The map , defined by its standard Taylor series, is a bijection between the set of elements of with constant term 0 and the set of elements of with constant term 1; the inverse of exp is log is grouplike (this means ) if and only if s is primitive (this means ). The grouplike elements form a group under multiplication. The primitive elements are exactly the formal infinite sums of elements of the Lie algebra generated by X and Y, where the Lie bracket is given by the commutator . (Friedrichs' theorem) The existence of the Campbell–Baker–Hausdorff formula can now be seen as follows: The elements X and Y are primitive, so and are grouplike; so their product is also grouplike; so its logarithm is primitive; and hence can be written as an infinite sum of elements of the Lie algebra generated by and . The universal enveloping algebra of the free Lie algebra generated by and is isomorphic to the algebra of all non-commuting polynomials in and . In common with all universal enveloping algebras, it has a natural structure of a Hopf algebra, with a coproduct . The ring used above is just a completion of this Hopf algebra. 
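The braiding identity mentioned above can be written for matrices as e^X e^Y e^{-X} = exp(e^X Y e^{-X}), and in this form it holds exactly, with no smallness assumption, because conjugation by e^X passes inside the exponential. A minimal numerical check (arbitrary illustrative matrices, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = 0.5 * rng.standard_normal((4, 4))
Y = 0.5 * rng.standard_normal((4, 4))

eX, eXinv = expm(X), expm(-X)

# Ad_{e^X} Y = e^X Y e^{-X}; for matrices this equals e^{ad_X} Y.
AdY = eX @ Y @ eXinv

lhs = eX @ expm(Y) @ eXinv    # e^X e^Y e^{-X}
rhs = expm(AdY)               # exp(Ad_{e^X} Y)
print(np.allclose(lhs, rhs))  # True
```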
Zassenhaus formula A related combinatoric expansion that is useful in dual applications is where the exponents of higher order in are likewise nested commutators, i.e., homogeneous Lie polynomials. These exponents, in , follow recursively by application of the above BCH expansion. As a corollary of this, the Suzuki–Trotter decomposition follows. Campbell identity The following identity (Campbell 1897) leads to a special case of the Baker–Campbell–Hausdorff formula. Let be a matrix Lie group and its corresponding Lie algebra. Let be the linear operator on defined by for some fixed . (The adjoint endomorphism encountered above.) Denote with for fixed the linear transformation of given by . A standard combinatorial lemma which is utilized in producing the above explicit expansions is given by This is a particularly useful formula which is commonly used to conduct unitary transforms in quantum mechanics. By defining the iterated commutator, we can write this formula more compactly as, An application of the identity For central, i.e., commuting with both and , Consequently, for , it follows that whose solution is Taking gives one of the special cases of the Baker–Campbell–Hausdorff formula described above: More generally, for non-central , we have which can be written as the following braiding identity: Infinitesimal case A particularly useful variant of the above is the infinitesimal form. This is commonly written as This variation is commonly used to write coordinates and vielbeins as pullbacks of the metric on a Lie group. For example, writing for some functions and a basis for the Lie algebra, one readily computes that for the structure constants of the Lie algebra. The series can be written more compactly (cf. main article) as with the infinite series Here, is a matrix whose matrix elements are . The usefulness of this expression comes from the fact that the matrix is a vielbein. Thus, given some map from some manifold to some manifold , the metric tensor on the manifold can be written as the pullback of the metric tensor on the Lie group , The metric tensor on the Lie group is the Cartan metric, the Killing form. For a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric. Application in quantum mechanics A special case of the Baker–Campbell–Hausdorff formula is useful in quantum mechanics and especially quantum optics, where and are Hilbert space operators, generating the Heisenberg Lie algebra. Specifically, the position and momentum operators in quantum mechanics, usually denoted and , satisfy the canonical commutation relation: where is the identity operator. It follows that and commute with their commutator. Thus, if we formally applied a special case of the Baker–Campbell–Hausdorff formula (even though and are unbounded operators and not matrices), we would conclude that This "exponentiated commutation relation" does indeed hold, and forms the basis of the Stone–von Neumann theorem. Further, A related application is the annihilation and creation operators, and . Their commutator is central, that is, it commutes with both and . As indicated above, the expansion then collapses to the semi-trivial degenerate form: where is just a complex number. This example illustrates the resolution of the displacement operator, , into exponentials of annihilation and creation operators and scalars. 
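The Campbell identity of this section expands e^X Y e^{-X} as the series Y + [X, Y] + (1/2!)[X, [X, Y]] + ..., i.e. the exponential of the adjoint action applied to Y. A short truncation check (illustrative random matrices, assuming NumPy and SciPy; not taken from the article):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(2)
X = 0.3 * rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))

exact = expm(X) @ Y @ expm(-X)

# Partial sums of  sum_n (ad_X)^n Y / n!  (iterated commutators of X with Y).
approx = np.zeros_like(Y)
term = Y.copy()
for n in range(12):
    approx = approx + term / factorial(n)
    term = comm(X, term)   # ad_X applied once more

print(np.linalg.norm(exact - approx))  # tiny residual
```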
This degenerate Baker–Campbell–Hausdorff formula then displays the product of two displacement operators as another displacement operator (up to a phase factor), with the resultant displacement equal to the sum of the two displacements, since the Heisenberg group they provide a representation of is nilpotent. The degenerate Baker–Campbell–Hausdorff formula is frequently used in quantum field theory as well. See also Matrix exponential Logarithm of a matrix Lie product formula (Trotter product formula) Lie group–Lie algebra correspondence Derivative of the exponential map Magnus expansion Stone–von Neumann theorem Golden–Thompson inequality Notes References Bibliography L. Corwin & F.P Greenleaf, Representation of nilpotent Lie groups and their applications, Part 1: Basic theory and examples, Cambridge University Press, New York, 1990, . Shlomo Sternberg (2004) Lie Algebras, Orange Grove Books, free, online Veltman, M, 't Hooft, G & de Wit, B (2007). "Lie Groups in Physics", online lectures. External links C.K. Zachos, Crib Notes on CBH expansions MathWorld page Lie groups Mathematical physics Combinatorics Exponentials
Baker–Campbell–Hausdorff formula
[ "Physics", "Mathematics" ]
3,243
[ "Discrete mathematics", "Mathematical structures", "Lie groups", "Applied mathematics", "Theoretical physics", "Combinatorics", "E (mathematical constant)", "Algebraic structures", "Exponentials", "Mathematical physics" ]
285,534
https://en.wikipedia.org/wiki/Supercooling
Supercooling, also known as undercooling, is the process of lowering the temperature of a liquid below its freezing point without it becoming a solid. Per the established international definition, supercooling means ‘cooling a substance below the normal freezing point without solidification’ While it can be achieved by different physical means, the postponed solidification is most often due to the absence of seed crystals or nuclei around which a crystal structure can form. The supercooling of water can be achieved without any special techniques other than chemical demineralization, down to . Supercooled water can occur naturally, for example in the atmosphere, animals or plants. Explanation A liquid crossing its standard freezing point will crystalize in the presence of a seed crystal or nucleus around which a crystal structure can form creating a solid. Lacking any such nuclei, the liquid phase can be maintained all the way down to the temperature at which crystal homogeneous nucleation occurs. Homogeneous nucleation can occur above the glass transition temperature, but if homogeneous nucleation has not occurred above that temperature, an amorphous (non-crystalline) solid will form. Water normally freezes at , but it can be "supercooled" at standard pressure down to its crystal homogeneous nucleation at almost . The process of supercooling requires water to be pure and free of nucleation sites, which can be achieved by processes like reverse osmosis or chemical demineralization, but the cooling itself does not require any specialised technique. If water is cooled at a rate on the order of 106 K/s, the crystal nucleation can be avoided and water becomes a glass—that is, an amorphous (non-crystalline) solid. Its glass transition temperature is much colder and harder to determine, but studies estimate it at about . Glassy water can be heated up to approximately without nucleation occurring. In the range of temperatures between , experiments find only crystal ice. Droplets of supercooled water often exist in stratus and cumulus clouds. An aircraft flying through such a cloud sees an abrupt crystallization of these droplets, which can result in the formation of ice on the aircraft's wings or blockage of its instruments and probes, unless the aircraft is equipped with an appropriate ice protection system. Freezing rain is also caused by supercooled droplets. The process opposite to supercooling, the melting of a solid above the freezing point, is much more difficult, and a solid will almost always melt at the same temperature for a given pressure. For this reason, it is the melting point which is usually identified, using melting point apparatus; even when the subject of a paper is "freezing-point determination", the actual methodology is "the principle of observing the disappearance rather than the formation of ice". It is possible, at a given pressure, to superheat a liquid above its boiling point without it becoming gaseous. Supercooling should not be confused with freezing-point depression. Supercooling is the cooling of a liquid below its freezing point without it becoming solid. Freezing point depression is when a solution can be cooled below the freezing point of the corresponding pure liquid due to the presence of the solute; an example of this is the freezing point depression that occurs when salt is added to pure water. 
Constitutional supercooling Constitutional supercooling, which occurs during solidification, is due to compositional solid changes, and results in cooling a liquid below the freezing point ahead of the solid–liquid interface. When solidifying a liquid, the interface is often unstable, and the velocity of the solid–liquid interface must be small in order to avoid constitutional supercooling. Constitutional supercooling is observed when the liquidus temperature gradient at the interface (the position x=0) is larger than the imposed temperature gradient: The liquidus slope from the binary phase diagram is given by , so the constitutional supercooling criterion for a binary alloy can be written in terms of the concentration gradient at the interface: The concentration gradient ahead of a planar interface is given by where is the interface velocity, the diffusion coefficient, and and are the compositions of the liquid and solid at the interface, respectively (i.e., ). For the steady-state growth of a planar interface, the composition of the solid is equal to the nominal alloy composition, , and the partition coefficient, , can be assumed constant. Therefore, the minimum thermal gradient necessary to create a stable solid front is given by For more information, see Chapter 3 of In animals In order to survive extreme low temperatures in certain environments, some animals use the phenomenon of supercooling that allow them to remain unfrozen and avoid cell damage and death. There are many techniques that aid in maintaining a liquid state, such as the production of antifreeze proteins, or AFPs, which bind to ice crystals to prevent water molecules from binding and spreading the growth of ice. The winter flounder is one such fish that utilizes these proteins to survive in its frigid environment. The liver secretes noncolligative proteins into the bloodstream. Other animals use colligative antifreezes, which increases the concentration of solutes in their bodily fluids, thus lowering their freezing point. Fish that rely on supercooling for survival must also live well below the water surface, because if they came into contact with ice nuclei they would freeze immediately. Animals that undergo supercooling to survive must also remove ice-nucleating agents from their bodies because they act as a starting point for freezing. Supercooling is also a common feature in some insect, reptile, and other ectotherm species. The potato cyst nematode larva (Globodera rostochiensis) could survive inside their cysts in a supercooled state to temperatures as low as , even with the cyst encased in ice. As an animal gets farther and farther below its melting point the chance of spontaneous freezing increases dramatically for its internal fluids, as this is a thermodynamically unstable state. The fluids eventually reach the supercooling point, which is the temperature at which the supercooled solution freezes spontaneously due to being so far below its normal freezing point. Animals unintentionally undergo supercooling and are only able to decrease the odds of freezing once supercooled. Even though supercooling is essential for survival, there are many risks associated with it. In plants Plants can also survive extreme cold conditions brought forth during the winter months. Many plant species located in northern climates can acclimate under these cold conditions by supercooling, thus these plants survive temperatures as low as . 
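A common textbook rearrangement of the constitutional supercooling criterion described above gives the minimum thermal gradient for a stable planar front as G >= V * |mL| * C0 * (1 - k) / (k * D). The sketch below simply evaluates that expression; the property values are illustrative assumptions, roughly representative of a dilute aluminium-copper melt, and are not taken from the article.

```python
# Constitutional supercooling criterion for a stable planar front, in a
# common textbook form:  G >= V * |m_L| * C0 * (1 - k) / (k * D).
# All property values below are illustrative assumptions.

m_L = -260.0   # liquidus slope, K per unit mass fraction (about -2.6 K/wt%)
C0  = 0.02     # nominal alloy composition, mass fraction (2 wt%)
k   = 0.14     # partition coefficient Cs/Cl at the interface
D   = 3.0e-9   # solute diffusivity in the liquid, m^2/s
V   = 1.0e-5   # interface (growth) velocity, m/s

G_min = V * abs(m_L) * C0 * (1 - k) / (k * D)
print(f"Minimum thermal gradient for a stable planar front: {G_min:.3g} K/m")
# Roughly 1e5 K/m here; imposed gradients below this value leave a
# constitutionally supercooled zone ahead of the interface.
```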
Although this supercooling phenomenon is poorly understood, it has been recognized through infrared thermography. Ice nucleation occurs in certain plant organs and tissues, debatably beginning in the xylem tissue and spreading throughout the rest of the plant. Infrared thermography allows for droplets of water to be visualized as they crystalize in extracellular spaces. Supercooling inhibits the formation of ice within the tissue by ice nucleation and allows the cells to maintain water in a liquid state and further allows the water within the cell to stay separate from extracellular ice. Cellular barriers such as lignin, suberin and the cuticle inhibit ice nucleators and force water into the supercooled tissue. The xylem and primary tissue of plants are very susceptible to cold temperatures because of the large proportion of water in the cell. Many boreal hardwood species in northern climates have the ability to prevent ice spreading into the shoots allowing the plant to tolerate the cold. Supercooling has been identified in the evergreen shrubs Rhododendron ferrugineum and Vaccinium vitis-idaea as well as Abies, Picea and Larix species. Freezing outside of the cell and within the cell wall does not affect the survival of the plant. However, the extracellular ice may lead to plant dehydration. In seawater The presence of salt in seawater affects the freezing point. For that reason, it is possible for seawater to remain in the liquid state at temperatures below melting point. This is "pseudo-supercooling" because the phenomenon is the result of freezing point lowering caused by the presence of salt, not supercooling. This condition is most commonly observed in the oceans around Antarctica where melting of the undersides of ice shelves at high-pressure results in liquid melt-water that can be below the freezing temperature. It is supposed that the water does not immediately refreeze due to a lack of nucleation sites. This provides a challenge to oceanographic instrumentation as ice crystals will readily form on the equipment, potentially affecting the data quality. Ultimately the presence of extremely cold seawater will affect the growth of sea ice. Applications One commercial application of supercooling is in refrigeration. Freezers can cool drinks to a supercooled level so that when they are opened, they form a slush. Another example is a product that can supercool the beverage in a conventional freezer. The Coca-Cola Company briefly marketed special vending machines containing Sprite in the UK, and Coke in Singapore, which stored the bottles in a supercooled state so that their content would turn to slush upon opening. Supercooling was successfully applied to organ preservation at Massachusetts General Hospital/Harvard Medical School. Livers that were later transplanted into recipient animals were preserved by supercooling for up to 4 days, quadrupling the limits of what could be achieved by conventional liver preservation methods. The livers were supercooled to a temperature of in a specialized solution that protected against freezing and injury from the cold temperature. Another potential application is drug delivery. In 2015, researchers crystallized membranes at a specific time. Liquid-encapsulated drugs could be delivered to the site and, with a slight environmental change, the liquid rapidly changes into a crystalline form that releases the drug. 
In 2016, a team at Iowa State University proposed a method for "soldering without heat" by using encapsulated droplets of supercooled liquid metal to repair heat sensitive electronic devices. In 2019, the same team demonstrated the use of undercooled metal to print solid metallic interconnects on surfaces ranging from polar (paper and Jello) to superhydrophobic (rose petals), with all the surfaces being lower modulus than the metal. Eftekhari et al. proposed an empirical theory explaining that supercooling of ionic liquid crystals can build ordered channels for diffusion for energy storage applications. In this case, the electrolyte has a rigid structure comparable to a solid electrolyte, but the diffusion coefficient can be as large as in liquid electrolytes. Supercooling increases the medium viscosity but keeps the directional channels open for diffusion. See also Amorphous solid Pumpable ice technology Subcooling Ultracold atom Viscous liquid Freezing rain Superheating References Further reading External links Supercooled liquids on arxiv.org Radiolab podcast on supercooling Thermodynamic processes Condensed matter physics Concepts in physics Glass physics
Supercooling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,323
[ "Glass engineering and science", "Thermodynamic processes", "Phases of matter", "Materials science", "Glass physics", "Thermodynamics", "Condensed matter physics", "nan", "Matter" ]
286,217
https://en.wikipedia.org/wiki/Flash%20evaporation
Flash evaporation (or partial evaporation) is the partial vaporization that occurs when a saturated liquid stream undergoes a reduction in pressure by passing through a throttling valve or other throttling device. This process is one of the simplest unit operations. If the throttling valve or device is located at the entry into a pressure vessel so that the flash evaporation occurs within the vessel, then the vessel is often referred to as a flash drum. If the saturated liquid is a single-component liquid (for example, propane or liquid ammonia), a part of the liquid immediately "flashes" into vapor. Both the vapor and the residual liquid are cooled to the saturation temperature of the liquid at the reduced pressure. This is often referred to as "auto-refrigeration" and is the basis of most conventional vapor compression refrigeration systems. If the saturated liquid is a multi-component liquid (for example, a mixture of propane, isobutane and normal butane), the flashed vapor is richer in the more volatile components than is the remaining liquid. Uncontrolled flash evaporation can result in a boiling liquid expanding vapor explosion (BLEVE). Flash evaporation of a single-component liquid The flash evaporation of a single-component liquid is an isenthalpic process and is often referred to as an adiabatic flash. The following equation, derived from a simple heat balance around the throttling valve or device, is used to predict how much of a single-component liquid is vaporized:
X = (H_L,up - H_L,down) / (H_V,down - H_L,down)
where:
X = weight ratio of vaporized liquid to total mass
H_L,up = upstream liquid enthalpy at the upstream temperature and pressure, J/kg
H_V,down = flashed vapor enthalpy at the downstream pressure and corresponding saturation temperature, J/kg
H_L,down = residual liquid enthalpy at the downstream pressure and corresponding saturation temperature, J/kg
If the enthalpy data required for the above equation are unavailable, then the following equation may be used:
X = cp (T_up - T_down) / H_vap
where:
X = weight fraction vaporized
cp = liquid specific heat at the upstream temperature and pressure, J/(kg·°C)
T_up = upstream liquid temperature, °C
T_down = liquid saturation temperature corresponding to the downstream pressure, °C
H_vap = liquid heat of vaporization at the downstream pressure and corresponding saturation temperature, J/kg
Here, the words "upstream" and "downstream" refer to before and after the liquid passes through the throttling valve or device. This type of flash evaporation is used in the desalination of brackish water or ocean water by "Multi-Stage Flash Distillation." The water is heated and then routed into a reduced-pressure flash evaporation "stage" where some of the water flashes into steam. This steam is subsequently condensed into salt-free water. The residual salty liquid from that first stage is introduced into a second flash evaporation stage at a pressure lower than the first stage pressure. More water is flashed into steam, which is also subsequently condensed into more salt-free water. This sequential use of multiple flash evaporation stages is continued until the design objectives of the system are met. A large part of the world's installed desalination capacity uses multi-stage flash distillation.
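The adiabatic-flash relations above can be evaluated directly. The sketch below is illustrative only (not from the article): it uses round-number properties for water flashed from a pressurized line at 120 °C down to atmospheric pressure, where the saturation temperature is 100 °C.

```python
# Fraction vaporized in an adiabatic (isenthalpic) single-component flash,
# using the specific-heat form:  X = cp * (T_up - T_down) / H_vap.
# Property values below are illustrative round numbers for water.

cp     = 4200.0      # liquid specific heat, J/(kg*degC)
T_up   = 120.0       # upstream liquid temperature, degC
T_down = 100.0       # saturation temperature at the downstream pressure, degC
H_vap  = 2.257e6     # heat of vaporization at the downstream pressure, J/kg

X = cp * (T_up - T_down) / H_vap
print(f"Weight fraction flashed to vapor: {X:.3f}")   # about 0.037, i.e. ~3.7 %
```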
Typically such plants have 24 or more sequential stages of flash evaporation. Equilibrium flash of a multi-component liquid The equilibrium flash of a multi-component liquid may be visualized as a simple distillation process using a single equilibrium stage. It is very different from, and more complex than, the flash evaporation of a single-component liquid. For a multi-component liquid, calculating the amounts of flashed vapor and residual liquid in equilibrium with each other at a given temperature and pressure requires a trial-and-error iterative solution. Such a calculation is commonly referred to as an equilibrium flash calculation. It involves solving the Rachford-Rice equation: Σi zi (Ki - 1) / (1 + β (Ki - 1)) = 0, where: zi is the mole fraction of component i in the feed liquid (assumed to be known); β is the fraction of feed that is vaporised; Ki is the equilibrium constant of component i. The equilibrium constants Ki are in general functions of many parameters, though the most important is arguably temperature; they are defined as Ki = yi / xi, where: xi is the mole fraction of component i in the liquid phase; yi is the mole fraction of component i in the gas phase. Once the Rachford-Rice equation has been solved for β, the compositions xi and yi can be immediately calculated as xi = zi / (1 + β (Ki - 1)) and yi = Ki xi. The Rachford-Rice equation can have multiple solutions for β, at most one of which guarantees that all xi and yi will be positive. In particular, if there is only one β for which all of the xi and yi are positive, then that β is the solution; if there are multiple such β's, it means that either Kmax < 1 or Kmin > 1, indicating respectively that no gas phase can be sustained (and therefore β = 0) or conversely that no liquid phase can exist (and therefore β = 1). It is possible to use Newton's method for solving the above equation, but there is a risk of converging to the wrong value of β; it is important to initialise the solver to a sensible initial value, such as (βmax + βmin)/2 (which is, however, not sufficient: Newton's method makes no guarantees on stability), or, alternatively, to use a bracketing solver such as the bisection method or the Brent method, which are guaranteed to converge but can be slower. The equilibrium flash of multi-component liquids is very widely utilized in petroleum refineries, petrochemical and chemical plants and natural gas processing plants. Contrast with spray drying Spray drying is sometimes seen as a form of flash evaporation. However, although it is a form of liquid evaporation, it is quite different from flash evaporation. In spray drying, a slurry of very small solids is rapidly dried by suspension in a hot gas. The slurry is first atomized into very small liquid droplets which are then sprayed into a stream of hot dry air. The liquid rapidly evaporates leaving behind dry powder or dry solid granules. The dry powder or solid granules are recovered from the exhaust air by using cyclones, bag filters or electrostatic precipitators. Natural flash evaporation Natural flash vaporization or flash deposition may occur during earthquakes, resulting in deposition of minerals held in supersaturated solutions, sometimes even valuable ore in the case of auriferous, gold-bearing, waters. This results when blocks of rock are rapidly pulled and pushed away from each other by jog faults. See also Evaporator Vapor–liquid separator Multi-stage flash distillation References External links Vapor and Flash Steam Animation, photos and technical explanation of the difference between Flash Steam and Vaporized fraction. 
Flash Steam Tutorial The benefits of recovering flash steam, how it is done and typical applications. Water Desalination Technologies in the Middle East and Western Asia Discussion of spray drying Flash evaporation program online Flash distillation of the hydrocarbon compounds. Gas-liquid separation Evaporators
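Returning to the equilibrium flash calculation described above: as the text notes, a bracketing method such as bisection is guaranteed to converge on the Rachford-Rice equation. The following sketch is illustrative only; the feed composition and K-values are invented numbers for a three-component feed, not data from the article.

```python
# Bisection solve of the Rachford-Rice equation
#   f(beta) = sum_i z_i * (K_i - 1) / (1 + beta * (K_i - 1)) = 0
# for the vapor fraction beta, followed by the phase compositions
#   x_i = z_i / (1 + beta * (K_i - 1)),   y_i = K_i * x_i.
# z and K below are invented illustrative values.

z = [0.30, 0.45, 0.25]        # feed mole fractions (sum to 1)
K = [2.30, 1.10, 0.45]        # equilibrium constants at the flash T and P

def rr(beta):
    return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K))

# A two-phase solution requires f(0) > 0 and f(1) < 0; since f is monotonically
# decreasing, bisection then converges to the unique root in (0, 1).
lo, hi = 0.0, 1.0
assert rr(lo) > 0 > rr(hi), "feed is all liquid or all vapor at these K-values"
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rr(mid) > 0 else (lo, mid)

beta = 0.5 * (lo + hi)
x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]
y = [Ki * xi for Ki, xi in zip(K, x)]
print(beta, x, y)
```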
Flash evaporation
[ "Chemistry", "Engineering" ]
1,605
[ "Separation processes by phases", "Chemical equipment", "Distillation", "Evaporators", "Gas-liquid separation" ]
286,245
https://en.wikipedia.org/wiki/Sodium%20silicate
Sodium silicate is a generic name for chemical compounds with the formula or ·, such as sodium metasilicate (), sodium orthosilicate (), and sodium pyrosilicate (). The anions are often polymeric. These compounds are generally colorless transparent solids or white powders, and soluble in water in various amounts. Sodium silicate is also the technical and common name for a mixture of such compounds, chiefly the metasilicate, also called waterglass, water glass, or liquid glass. The product has a wide variety of uses, including the formulation of cements, coatings, passive fire protection, textile and lumber processing, manufacture of refractory ceramics, as adhesives, and in the production of silica gel. The commercial product, available in water solution or in solid form, is often greenish or blue owing to the presence of iron-containing impurities. In industry, the various grades of sodium silicate are characterized by their SiO2:Na2O weight ratio (which can be converted to molar ratio by multiplication with 1.032). The ratio can vary between 1:2 and 3.75:1. Grades with ratio below 2.85:1 are termed alkaline. Those with a higher SiO2:Na2O ratio are described as neutral. History Soluble silicates of alkali metals (sodium or potassium) were observed by European alchemists in the 16th century. Giambattista della Porta observed in 1567 that tartari salis (cream of tartar, potassium bitartrate) caused powdered crystallum (quartz) to melt at a lower temperature. Other possible early references to alkali silicates were made by Basil Valentine in 1520, and by Agricola in 1550. Around 1640, Jan Baptist van Helmont reported the formation of alkali silicates as a soluble substance made by melting sand with excess alkali, and observed that the silica could be precipitated quantitatively by adding acid to the solution. In 1646, Glauber made potassium silicate, which he called liquor silicum, by melting potassium carbonate (obtained by calcinating cream of tartar) and sand in a crucible, and keeping it molten until it ceased to bubble (due to the release of carbon dioxide). The mixture was allowed to cool and then was ground to a fine powder. When the powder was exposed to moist air, it gradually formed a viscous liquid, which Glauber called "Oleum oder Liquor Silicum, Arenæ, vel Crystallorum" (i.e., oil or solution of silica, sand or quartz crystal). However, it was later claimed that the substances prepared by those alchemists were not waterglass as it is understood today. That would have been prepared in 1818 by Johann Nepomuk von Fuchs, by treating silicic acid with an alkali; the result being soluble in water, "but not affected by atmospheric changes". The terms "water glass" and "soluble glass" were used by Leopold Wolff in 1846, by Émile Kopp in 1857, and by Hermann Krätzer in 1887. In 1892, Rudolf Von Wagner distinguished soda, potash, double (soda and potash), and fixing (i.e., stabilizing) as types of water glass. The fixing type was "a mixture of silica well saturated with potash water glass and a sodium silicate" used to stabilize inorganic water color pigments on cement work for outdoor signs and murals. Properties Sodium silicates are colorless glassy or crystalline solids, or white powders. Except for the most silicon-rich ones, they are readily soluble in water, producing alkaline solutions. When dried up it still can be rehydrated in water. Sodium silicates are stable in neutral and alkaline solutions. 
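The factor of 1.032 quoted above for converting the SiO2:Na2O weight ratio into a molar ratio follows directly from the molar masses of Na2O (about 61.98 g/mol) and SiO2 (about 60.08 g/mol). A small illustrative check; the example weight ratio is an assumption, not a value from the article:

```python
# Converting a SiO2:Na2O weight ratio into a molar ratio.
# molar ratio = weight ratio * (M_Na2O / M_SiO2); the molar masses give
# the factor of ~1.032 quoted in the text.

M_Na2O = 2 * 22.990 + 15.999            # ~61.98 g/mol
M_SiO2 = 28.086 + 2 * 15.999            # ~60.08 g/mol
factor = M_Na2O / M_SiO2
print(round(factor, 3))                 # 1.032

weight_ratio = 3.22                     # illustrative commercial "neutral" grade
print(round(weight_ratio * factor, 2))  # corresponding molar SiO2:Na2O ratio
```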
In acidic solutions, the silicate ions react with hydrogen ions to form silicic acids, which tend to decompose into hydrated silicon dioxide gel. Heated to drive off the water, the result is a hard translucent substance called silica gel, widely used as a desiccant. It can withstand temperatures up to 1100 °C. Production Solutions of sodium silicates can be produced by treating a mixture of silica (usually as quartz sand), caustic soda, and water, with hot steam in a reactor. The overall reaction is 2x NaOH + → · + x Sodium silicates can also be obtained by dissolving silica (whose melting point is 1713 °C) in molten sodium carbonate (that melts with decomposition at 851 °C): x + → · + The material can be obtained also from sodium sulfate (melting point 884 °C) with carbon as a reducing agent: 2x + C + 2 → 2 · + 2 + In 1990, 4 million tons of alkali metal silicates were produced. Ferrosilicon Sodium silicate may be produced as a part of hydrogen production by dissolving ferrosilicon in an aqueous sodium hydroxide (NaOH·H2O) solution: 2NaOH + Si + H2O → 2Na2SiO3 + 2H2 Bayer process Though unprofitable, Na2SiO3 is a byproduct of Bayer process which is often converted to calcium silicate (Ca2SiO4). Uses The main applications of sodium silicates are in detergents, paper industry (as a deinking agent), water treatment, and construction materials. Adhesives The adhesive properties of sodium silicate were noted as early as the 1850s and have been widely used at least since the First World War. The largest application of sodium silicate solutions is a cement for producing cardboard. When used as a paper cement, the sodium silicate joint tends to crack within a few years, at which point it no longer holds the paper surfaces cemented together. Sodium silicate solutions can also be used as a spin-on adhesive layer to bond glass to glass or a silicon dioxide–covered silicon wafer to one another. Sodium silicate glass-to-glass bonding has the advantage that it is a low-temperature bonding technique, as opposed to fusion bonding. It also requires less processing than glass-to-glass anodic bonding, which requires an intermediate layer such as silicon nitride (SiN) to act as a diffusion barrier for sodium ions. The deposition of such a layer requires a low-pressure chemical vapor deposition step. A disadvantage of sodium silicate bonding, however, is that it is very difficult to eliminate air bubbles. This is in part because the technique does not require a vacuum and also does not use field assistance as in anodic bonding. This lack of field assistance can sometimes be beneficial, because field assistance can provide such high attraction between wafers as to bend a thinner wafer and collapse onto nanofluidic cavity or MEMS elements. Coatings Sodium silicate may be used for various paints and coatings, such as those used on welding rods. Such coatings can be cured in two ways. One method is to heat a thin layer of sodium silicate into a gel and then into a hard film. To make the coating water-resistant, high temperatures of are needed. The temperature is slowly raised to to dehydrate the film and avoid steaming and blistering. The process must be relatively slow, but infrared lamps may be used at first. In the other method, when high temperatures are not practical, the water resistance may be achieved by chemicals (or esters), such as boric acid, phosphoric acid, sodium fluorosilicate, and aluminium phosphate. Before application, an aqueous solution of sodium silicate is mixed with a curing agent. 
It is used in detergent auxiliaries such as complex sodium disilicate and modified sodium disilicate. The detergent granules gain their ruggedness from a coating of silicates. Water treatment Sodium silicate is used as an alum coagulant and an iron flocculant in wastewater treatment plants. Sodium silicate binds to colloidal molecules, creating larger aggregates that sink to the bottom of the water column. The microscopic negatively charged particles suspended in water interact with sodium silicate. Their electrical double layer collapses due to the increase of ionic strength caused by the addition of sodium silicate (doubly negatively charged anion accompanied by two sodium cations) and they subsequently aggregate. This process is called coagulation. Foundries, refractories and pottery It is used as a binder of the sand when doing sand casting of all common metals. It allows for the rapid production of a strong mold or core by three main methods. Method 1 requires passing carbon dioxide gas through the mixture of sand and sodium silicate in the sand molding box or core box. The carbon dioxide reacts with the sodium silicate to form solid silica gel and sodium carbonate. This provides adequate strength to remove the now hardened sand shape from the forming tool. Additional strength occurs as any unreacted sodium silicate in the sand shape dehydrates. Method 2 requires adding an ester (reaction product of an acid and an alcohol) to the mixture of sand and sodium silicate before it is placed into the molding box or core box. As the ester hydrolyzes from the water in the liquid sodium silicate, an acid is released which causes the liquid sodium silicate to gel. Once the gel has formed, it will dehydrate to a glassy phase as a result of syneresis. Commonly used esters include acetate esters of glycerol and ethylene glycol as well as carbonate esters of propylene and ethylene glycol. The higher the water solubility of the ester, the faster the hardening of the sand. Method 3 requires microwave energy to heat and dehydrate the mixture of sand and sodium silicate in the sand molding box or core box. The forming tools must pass through microwaves for this to work well. Because sodium silicate has a high dielectric constant, it absorbs microwave energy very rapidly. Fully dehydrated sand shapes can be produced within a minute of microwave exposure. This method produces the highest strength of sand shapes bonded with sodium silicate. Since the sodium silicate does not burn during casting (it can actually melt at pouring temperatures above 1800 °F), it is common to add organic materials to provide for enhanced sand breakdown after casting. The additives include sugar, starch, carbons, wood flour and phenolic resins. Water glass is a useful binder for solids, such as vermiculite and perlite. When blended with the latter lightweight fraction, water glass can be used to make hard, high-temperature insulation boards used for refractories, passive fire protection, and high-temperature insulations, such as in moulded pipe insulation applications. When mixed with finely divided mineral powders, such as vermiculite dust (which is common scrap from the exfoliation process), one can produce high temperature adhesives. The intumescence disappears in the presence of finely divided mineral dust, whereby the waterglass becomes a mere matrix. Waterglass is inexpensive and abundantly available, which makes its use popular in many refractory applications. 
Sodium silicate is used as a deflocculant in casting slips helping reduce viscosity and the need for large amounts of water to liquidize the clay body. It is also used to create a crackle effect in pottery, usually wheel-thrown. A vase or bottle is thrown on the wheel, fairly narrow and with thick walls. Sodium silicate is brushed on a section of the piece. After five minutes, the wall of the piece is stretched outward with a rib or hand. The result is a wrinkled or cracked look. It is also the main agent in "magic water", which is used when joining clay pieces, especially if the moisture level of the two differs. Dyes Sodium silicate solution is used as a fixative for hand dyeing with reactive dyes that require a high pH to react with the textile fiber. After the dye is applied to a cellulose-based fabric, such as cotton or rayon, or onto silk, it is allowed to dry, after which the sodium silicate is painted on to the dyed fabric, covered with plastic to retain moisture, and left to react for an hour at room temperature. Repair work Sodium silicate is used, along with magnesium silicate, in muffler repair and fitting paste. Magnesium silicate can be mixed with a solution of sodium silicate to form a thick paste that is easy to apply. When the exhaust system of an internal combustion engine heats up to its operating temperature, the heat drives out all of the excess water from the paste. The silicate compounds that are left over have glass-like properties, making a temporary, brittle repair that can be reinforced with glass fibre. Sodium silicate can be used to fill gaps in the head gasket of an engine. This is especially useful for aluminium alloy cylinder heads, which are sensitive to thermally induced surface deflection. Sodium silicate is added to the cooling system through the radiator and allowed to circulate. When the sodium silicate reaches its "conversion" temperature of , it loses water molecules and forms a glass seal with a re-melting temperature above . This repair can last two years or longer, and symptoms disappear instantly. However, this repair works only when the sodium silicate reaches its "conversion" temperature. Also, sodium silicate (glass particulate) contamination of lubricants is detrimental to their function, and contamination of engine oil is a serious possibility in situations in which a coolant-to-oil leak is present. Sodium silicate solution is used to inexpensively, quickly, and permanently disable automobile engines. Running an engine with half a U.S. gallon (or about two liters) of a sodium silicate solution instead of motor oil causes the solution to precipitate, catastrophically damaging the engine's bearings and pistons within a few minutes. In the United States, this procedure was used to comply with requirements of the Car Allowance Rebate System (CARS) program. Construction A mixture of sodium silicate and sawdust has been used in between the double skin of certain safes. This not only makes them more fire resistant, but also makes cutting them open with an oxyacetylene torch extremely difficult due to the smoke emitted. Sodium silicate is frequently used in drilling fluids to stabilize and avoid the collapse of borehole walls. It is particularly useful when drill holes pass through argillaceous formations containing swelling clay minerals such as smectite or montmorillonite. Concrete treated with a sodium silicate solution helps to reduce porosity in most masonry products such as concrete, stucco, and plasters. 
This effect aids in reducing water penetration, but has no known effect on reducing water vapor transmission and emission. A chemical reaction occurs with the excess Ca(OH)2 (portlandite) present in the concrete that permanently binds the silicates with the surface, making them far more durable and water repellent. This treatment generally is applied only after the initial cure has taken place (approximately seven days depending on conditions). These coatings are known as silicate mineral paint. An example of the reaction of sodium silicate with the calcium hydroxide found in concrete to form calcium silicate hydrate (CSH) gel, the main product in hydrated Portland cement, follows. + + → + Crystal gardens When crystals of a number of metallic salts are dropped into a solution of water glass, simple or branching stalagmites of colored metal silicates are formed. This phenomenon has been used by manufacturers of toys and chemistry sets to provide instructive enjoyment to many generations of children from the early 20th century until the present. An early mention of crystals of metallic salts forming a "chemical garden" in sodium silicate is found in the 1946 Modern Mechanix magazine. Metal salts used included the sulfates and/or chlorides of copper, cobalt, iron, nickel, and manganese. Sealants Sodium silicate with additives was injected into the ground to harden it and thereby to prevent further leakage of highly radioactive water from the Fukushima Daiichi nuclear power plant in Japan in April, 2011. The residual heat carried by the water used for cooling the damaged reactors accelerated the setting of the injected mixture. On June 3, 1958, the USS Nautilus, the world's first nuclear submarine, visited Everett and Seattle. In Seattle, crewmen dressed in civilian clothing were sent in to secretly buy 140 quarts (160 liters) of an automotive product containing sodium silicate (originally identified as Stop Leak) to repair a leaking condenser system. The Nautilus was en route to the North Pole on a top secret mission to cross the North Pole submerged. Firearms A historical use of the adhesive properties of sodium silicates is the production of paper cartridges for black powder revolvers produced by Colt's Manufacturing Company between 1851 and 1873, especially during the American Civil War. Sodium silicate was used to seal combustible nitrated paper together to form a conical paper cartridge to hold the black powder, as well as to cement the lead ball or conical bullet into the open end of the paper cartridge. Such sodium silicate cemented paper cartridges were inserted into the cylinders of revolvers, thereby speeding the reloading of cap-and-ball black powder revolvers. This use largely ended with the introduction of Colt revolvers employing brass-cased cartridges starting in 1873. Similarly, sodium silicate was also used to cement the top wad into brass shotgun shells, thereby eliminating any need for a crimp at the top of the brass shotgun shell to hold a shotgun shell together. Reloading brass shotgun shells was widely practiced by self-reliant American farmers during the 1870s, using the same waterglass material that was also used to preserve eggs. The cementing of the top wad on a shotgun shell consisted of applying from three to five drops of waterglass on the top wad to secure it to the brass hull. Brass hulls for shotgun shells were superseded by paper hulls starting around 1877. 
The newer paper-hulled shotgun shells used a roll crimp in place of a waterglass-cemented joint to hold the top wad in the shell. However, whereas brass shotshells with top wads cemented with waterglass could be reloaded nearly indefinitely (given powder, wad, and shot, of course), the paper hulls that replaced the brass hulls could be reloaded only a few times. Food and medicine Sodium silicate and other silicates are the primary components in "instant" wrinkle remover creams, which temporarily tighten the skin to minimize the appearance of wrinkles and under-eye bags. These creams, when applied as a thin film and allowed to dry for a few minutes, can present dramatic results. This effect is not permanent, lasting from a few minutes up to a couple of hours. It works like water cement, once the muscle starts to move, it cracks and leaves white residues on the skin. Waterglass has been used as an egg preservative with large success, primarily when refrigeration is not available. Fresh-laid eggs are immersed in a solution of sodium silicate (waterglass). After being immersed in the solution, they are removed and allowed to dry. A permanent air tight coating remains on the eggs. If they are then stored in appropriate environment, the majority of bacteria which would otherwise cause them to spoil are kept out and their moisture is kept in. According to the cited source, treated eggs can be kept fresh using this method for up to five months. When boiling eggs preserved that way, the shell is no longer permeable to air, and the egg will tend to crack unless a hole in the shell is made (e.g., with a pin) in order to allow steam to escape. Sodium silicate's flocculant properties are also used to clarify wine and beer by precipitating colloidal particles. As a clearing agent, though, sodium silicate is sometimes confused with isinglass which is prepared from collagen extracted from the dried swim bladders of sturgeon and other fishes. Eggs can be preserved in a bucket of waterglass gel, and their shells are sometimes also used (baked and crushed) to clear wine. Sodium silicate gel is also used as a substrate for algal growth in aquaculture hatcheries. See also Precipitated silica Sodium carbonate Sodium stannate Sodium germanate References Further reading Ashford's Dictionary of Industrial Chemicals, third edition, 2011, page 8369. External links Centre Européen d'Etudes des Silicates International Chemical Safety Card 1137 ChemSub Online : Silicic acid, sodium salt ChemSub Online : Sodium metasilicate Cement Concrete Drilling technology Glass compositions Inorganic silicon compounds Sodium compounds Silicates
Sodium silicate
[ "Chemistry", "Engineering" ]
4,419
[ "Structural engineering", "Glass chemistry", "Inorganic compounds", "Glass compositions", "Concrete", "Inorganic silicon compounds" ]
286,322
https://en.wikipedia.org/wiki/Crash%20test
A crash test is a form of destructive testing usually performed in order to ensure safe design standards in crashworthiness and crash compatibility for various modes of transportation (see automobile safety) or related systems and components. Types Frontal-impact tests: which is what most people initially think of when asked about a crash test. Vehicles usually impact a solid concrete wall at a specified speed, but these can also be vehicle impacting vehicle tests. SUVs have been singled out in these tests for a while, due to the high ride-height that they often have. Moderate Overlap tests: in which only part of the front of the car impacts with a barrier (vehicle). These are important, as impact forces (approximately) remain the same as with a frontal impact test, but a smaller fraction of the car is required to absorb all of the force. These tests are often realized by cars turning into oncoming traffic. This type of testing is done by the U.S.A. Insurance Institute for Highway Safety (IIHS), Euro NCAP, Australasian New Car Assessment Program (ANCAP) and ASEAN NCAP. Small Overlap tests: this is where only a small portion of the car's structure strikes an object such as a pole or a tree, or if a car were to clip another car. This is the most demanding test because it loads the most force onto the structure of the car at any given speed. These are usually conducted at 15–20% of the front vehicle structure. Side-impact tests: these forms of accidents have a very significant likelihood of fatality, as cars do not have a significant crumple zone to absorb the impact forces before an occupant is injured. Pole-impact tests: A difficult test which places a large amount of force on a small proportion on the side of the vehicle. Roll-over tests: which tests a car's ability (specifically the pillars holding the roof) to support itself in a dynamic impact. More recently, dynamic rollover tests have been proposed in lieu of static crush testing (video). Roadside hardware crash tests: are used to ensure crash barriers and crash cushions will protect vehicle occupants from roadside hazards, and also to ensure that guard rails, sign posts, light poles and similar appurtenances do not pose an undue hazard to vehicle occupants. Old versus new: Often an old and big car against a small and new car, or two different generations of the same car model. These tests are performed to show the advancements in crash-worthiness. Computer model: Because of the cost of full-scale crash tests, engineers often run many simulated crash tests using computer models to refine their vehicle or barrier designs before conducting live tests. Sled testing: A cost-effective way of testing components such as airbags and seat belts is conducting sled crash testing. The two most common types of sled systems are reverse-firing sleds which are fired from a standstill, and decelerating sleds which are accelerated from a starting point and stopped in the crash area with a hydraulic ram. It can also be used to evaluate the whiplash protection of a vehicle's seat. Major providers Auto Review Car Assessment Program (ARCAP) Allgemeiner Deutscher Automobil-Club (ADAC) in Germany National Highway Traffic Safety Administration (NHTSA) in the United States, specifically the Federal Motor Vehicle Safety Standard (FMVSS) and New Car Assessment Program (NCAP) Data collection Crash tests are conducted under rigorous scientific and safety standards. Each crash test is very expensive so the maximum amount of data must be extracted from each test. 
Usually, this requires the use of high-speed data acquisition, at least one triaxial accelerometer and a crash test dummy, but often includes more. Organizations that conduct crash tests include Calspan, an independent test laboratory in Buffalo, NY. On the strength of its capabilities and expertise, Calspan has been awarded five-year contracts by the National Highway Traffic Safety Administration (NHTSA) to conduct FMVSS No. 214 (Side Impact Protection) compliance testing, FMVSS No. 301 (Fuel System Integrity) and FMVSS No. 305 (Electric Powered Vehicles: Electrolyte Spillage and Electrical Shock Protection) vehicle crash tests. Calspan also holds the NHTSA contracts for executing New Car Assessment Program crash tests. The Monash University department of civil engineering also routinely conducts crash tests for the purposes of roadside barrier safety and design. Consumer response In 1998 the Rover 100 received a one-star Adult Occupant Rating in EuroNCAP crash tests; sales promptly collapsed and the 18-year-old design was quickly scrapped. In 2005 the Daewoo Kalos made news in Europe and Australia by scoring only two stars in its crash test, resulting in lower sales and demonstrating the influence of vehicle crashworthiness on a model's success in the marketplace. For Holden in Australia, which retailed the Kalos as the Holden Barina, the result generated considerable negative publicity, with the managing director of Holden forced to publicly defend the vehicle. The second-generation Isuzu Trooper (1995–1997) models were rated "Not Acceptable" by Consumer Reports for their tendency to roll over during testing. After the report, Trooper sales never recovered and two years later production ceased. Crash testing programs There are a number of crash test programs around the world dedicated to providing consumers with a source of comparative information on the safety performance of new and used vehicles. Examples of new car crash test programs include the National Highway Traffic Safety Administration's NCAP, the Insurance Institute for Highway Safety, the Australasian New Car Assessment Program, EuroNCAP and JapNCAP. Programs such as the Used Car Safety Ratings provide consumers with information on the safety performance of vehicles based on real-world crash data. In 2020, EuroNCAP introduced a mobile progressive deformable barrier (MPDB) test, first applied to the Toyota Yaris. See also Air safety Automobile safety Automobile safety rating Car accident Crash test dummy Crashworthiness European New Car Assessment Programme (Euro NCAP) Head injury criterion Insurance Institute for Highway Safety Moose test Out of position (crash testing) NASA Impact Dynamics Research Facility References External links Automotive Safety and Bharat NCAP How Crash Testing Works at HowStuffWorks Insurance Institute of Highway Safety EuroNCAP Motorward: All you need to know about crash tests Mechanical tests Transport safety Product testing
Crash test
[ "Physics", "Engineering" ]
1,322
[ "Mechanical tests", "Transport safety", "Physical systems", "Transport", "Mechanical engineering" ]
286,466
https://en.wikipedia.org/wiki/Hydrothermal%20circulation
Hydrothermal circulation in its most general sense is the circulation of hot water (Ancient Greek ὕδωρ, water, and θέρμη, heat ). Hydrothermal circulation occurs most often in the vicinity of sources of heat within the Earth's crust. In general, this occurs near volcanic activity, but can occur in the shallow to mid crust along deeply penetrating fault irregularities or in the deep crust related to the intrusion of granite, or as the result of orogeny or metamorphism. Hydrothermal circulation often results in hydrothermal mineral deposits. Seafloor hydrothermal circulation Hydrothermal circulation in the oceans is the passage of the water through mid-oceanic ridge systems. The term includes both the circulation of the well-known, high-temperature vent waters near the ridge crests, and the much-lower-temperature, diffuse flow of water through sediments and buried basalts further from the ridge crests. The former circulation type is sometimes termed "active", and the latter "passive". In both cases, the principle is the same: Cold, dense seawater sinks into the basalt of the seafloor and is heated at depth whereupon it rises back to the rock-ocean water interface due to its lesser density. The heat source for the active vents is the newly formed basalt, and, for the highest temperature vents, the underlying magma chamber. The heat source for the passive vents is the still-cooling older basalts. Heat flow studies of the seafloor suggest that basalts within the oceanic crust take millions of years to completely cool as they continue to support passive hydrothermal circulation systems. Hydrothermal vents are locations on the seafloor where hydrothermal fluids mix into the overlying ocean. Perhaps the best-known vent forms are the naturally occurring chimneys referred to as black smokers. Volcanic and magma related hydrothermal circulation Hydrothermal circulation is not limited to ocean ridge environments. Hydrothermal circulating convection cells can exist in any place an anomalous source of heat, such as an intruding magma or volcanic vent, comes into contact with the groundwater system where permeability allows flow. This convection can manifest as hydrothermal explosions, geysers, and hot springs, although this is not always the case.   Hydrothermal circulation above magma bodies has been intensively studied in the context of geothermal projects where many deep wells are drilled into the system to produce and subsequently re-inject the hydrothermal fluids. The detailed data sets available from this work show the long term persistence of these systems, the development of fluid circulation patterns, histories that can be influenced by renewed magmatism, fault movement, or changes associated with hydrothermal brecciation and eruption sometimes followed by massive cold water invasion. Less direct but as intensive study has focused on the minerals deposited especially in the upper parts of hydrothermal circulation systems. Understanding volcanic and magma-related hydrothermal circulation means studying hydrothermal explosions, geysers, hot springs, and other related systems and their interactions with associated surface water and groundwater bodies. A good environment to observe this phenomenon is in volcanogenic lakes where hot springs and geysers are commonly present. 
The convection systems in these lakes work through cold lake water percolating downward through the permeable lake bed, mixing with groundwater heated by magma or residual heat, and rising to form thermal springs at discharge points. The existence of hydrothermal convection cells and hot springs or geysers in these environments depends not only on the presence of a colder water body and geothermal heat but also strongly depends on a no-flow boundary at the water table. These systems can develop their own boundaries. For example the water level represents a fluid pressure condition that leads to gas exsolution or boiling that in turn causes intense mineralization that can seal cracks. Deep crust Hydrothermal also refers to the transport and circulation of water within the deep crust, in general from areas of hot rocks to areas of cooler rocks. The causes for this convection can be: Intrusion of magma into the crust Radioactive heat generated by cooled masses of granite Heat from the mantle Hydraulic head from mountain ranges, for example, the Great Artesian Basin Dewatering of metamorphic rocks, which liberates water Dewatering of deeply buried sediments Hydrothermal circulation, in particular in the deep crust, is a primary cause of mineral deposit formation and a cornerstone of most theories on ore genesis. Hydrothermal ore deposits During the early 1900s, various geologists worked to classify hydrothermal ore deposits that they assumed formed from upward-flowing aqueous solutions. Waldemar Lindgren (1860–1939) developed a classification based on interpreted decreasing temperature and pressure conditions of the depositing fluid. His terms: "hypothermal", "mesothermal", "epithermal" and "teleothermal", expressed decreasing temperature and increasing distance from a deep source. Recent studies retain only the epithermal label. John Guilbert's 1985 revision of Lindgren's system for hydrothermal deposits includes the following: Ascending hydrothermal fluids, magmatic or meteoric water Porphyry copper and other deposits, 200–800 °C, moderate pressure Igneous metamorphic, 300–800 °C, low to moderate pressure Cordilleran veins, intermediate to shallow depths Epithermal, shallow to intermediate, 50–300 °C, low pressure Circulating heated meteoric solutions Mississippi Valley-type deposits, 25–200 °C, low pressure Western US uranium, 25–75 °C, low pressure Circulating heated seawater Oceanic ridge deposits, 25–300 °C, low pressure See also Volcanogenic massive sulfide ore deposit Geothermal gradient Hydrothermal synthesis References Geological processes Physical oceanography
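Whether a convection cell of the kind described above can develop at all is commonly assessed with the Rayleigh number for a fluid-saturated porous layer heated from below; convection sets in roughly when it exceeds the critical value 4π² ≈ 39.5. The short sketch below evaluates this criterion. All of the rock and fluid property values are assumed, order-of-magnitude inputs chosen only to illustrate the calculation, not measurements from any particular hydrothermal system.

```python
import math

def porous_rayleigh(perm_m2, dT_K, thickness_m,
                    rho=1000.0, g=9.81, beta=2.1e-4,   # fluid density, gravity, thermal expansivity
                    mu=2.8e-4, kappa=1.0e-6):          # dynamic viscosity, thermal diffusivity
    """Rayleigh-Darcy number for a fluid-saturated porous layer heated from below."""
    return rho * g * beta * dT_K * perm_m2 * thickness_m / (mu * kappa)

# Illustrative (assumed) values for young, fractured oceanic basalt.
Ra = porous_rayleigh(perm_m2=1e-13, dT_K=150.0, thickness_m=500.0)
critical = 4.0 * math.pi ** 2
print(f"Ra = {Ra:.0f}, critical value ~ {critical:.1f}, "
      f"convection {'expected' if Ra > critical else 'not expected'}")
```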
Hydrothermal circulation
[ "Physics" ]
1,144
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
286,534
https://en.wikipedia.org/wiki/Truncated%20icosidodecahedron
In geometry, a truncated icosidodecahedron, rhombitruncated icosidodecahedron, great rhombicosidodecahedron, omnitruncated dodecahedron or omnitruncated icosahedron is an Archimedean solid, one of thirteen convex, isogonal, non-prismatic solids constructed by two or more types of regular polygon faces. It has 62 faces: 30 squares, 20 regular hexagons, and 12 regular decagons. It has the most edges and vertices of all Platonic and Archimedean solids, though the snub dodecahedron has more faces. Of all vertex-transitive polyhedra, it occupies the largest percentage (89.80%) of the volume of a sphere in which it is inscribed, very narrowly beating the snub dodecahedron (89.63%) and small rhombicosidodecahedron (89.23%), and less narrowly beating the truncated icosahedron (86.74%); it also has by far the greatest volume (206.8 cubic units) when its edge length equals 1. Of all vertex-transitive polyhedra that are not prisms or antiprisms, it has the largest sum of angles (90 + 120 + 144 = 354 degrees) at each vertex; only a prism or antiprism with more than 60 sides would have a larger sum. Since each of its faces has point symmetry (equivalently, 180° rotational symmetry), the truncated icosidodecahedron is a 15-zonohedron. Names The name great rhombicosidodecahedron refers to the relationship with the (small) rhombicosidodecahedron (compare section Dissection). There is a nonconvex uniform polyhedron with a similar name, the nonconvex great rhombicosidodecahedron. Area and volume The surface area A and the volume V of the truncated icosidodecahedron of edge length a are: A = 30(1 + √3 + √(5 + 2√5)) a² ≈ 174.292 a² and V = (95 + 50√5) a³ ≈ 206.803 a³. If a set of all 13 Archimedean solids were constructed with all edge lengths equal, the truncated icosidodecahedron would be the largest. Cartesian coordinates Cartesian coordinates for the vertices of a truncated icosidodecahedron with edge length 2φ − 2, centered at the origin, are all the even permutations of: (±1/φ, ±1/φ, ±(3 + φ)), (±2/φ, ±φ, ±(1 + 2φ)), (±1/φ, ±φ², ±(−1 + 3φ)), (±(2φ − 1), ±2, ±(2 + φ)) and (±φ, ±3, ±2φ), where φ = (1 + √5)/2 is the golden ratio. Dissection The truncated icosidodecahedron is the convex hull of a rhombicosidodecahedron with cuboids above its 30 squares, whose height to base ratio is . The rest of its space can be dissected into nonuniform cupolas, namely 12 between inner pentagons and outer decagons and 20 between inner triangles and outer hexagons. An alternative dissection also has a rhombicosidodecahedral core. It has 12 pentagonal rotundae between inner pentagons and outer decagons. The remaining part is a toroidal polyhedron. Orthogonal projections The truncated icosidodecahedron has seven special orthogonal projections, centered on a vertex, on three types of edges, and three types of faces: square, hexagonal and decagonal. The last two correspond to the A2 and H2 Coxeter planes. Spherical tilings and Schlegel diagrams The truncated icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane. Schlegel diagrams are similar, with a perspective projection and straight edges. Geometric variations Within icosahedral symmetry there are unlimited geometric variations of the truncated icosidodecahedron with isogonal faces. The truncated dodecahedron, rhombicosidodecahedron, and truncated icosahedron arise as degenerate limiting cases.
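The coordinate description above can be checked directly. The following is a small illustrative script (not drawn from any reference implementation) that generates the even permutations of the five coordinate families with every choice of signs and confirms that 120 distinct vertices result, with shortest vertex-to-vertex distance equal to the stated edge length 2φ − 2.

```python
from itertools import product
import numpy as np

phi = (1 + 5 ** 0.5) / 2
families = [
    (1 / phi, 1 / phi, 3 + phi),
    (2 / phi, phi, 1 + 2 * phi),
    (1 / phi, phi ** 2, -1 + 3 * phi),
    (2 * phi - 1, 2, 2 + phi),
    (phi, 3, 2 * phi),
]

verts = set()
for fam in families:
    # the even permutations of three coordinates are exactly the three cyclic rotations
    for rot in range(3):
        base = fam[rot:] + fam[:rot]
        for signs in product((1, -1), repeat=3):
            verts.add(tuple(round(s * c, 9) for s, c in zip(signs, base)))

pts = np.array(sorted(verts))
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
shortest = dists[dists > 1e-9].min()
print(len(pts), "vertices; shortest distance =", round(shortest, 6),
      "; 2*phi - 2 =", round(2 * phi - 2, 6))
```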
Truncated icosidodecahedral graph In the mathematical field of graph theory, a truncated icosidodecahedral graph (or great rhombicosidodecahedral graph) is the graph of vertices and edges of the truncated icosidodecahedron, one of the Archimedean solids. It has 120 vertices and 180 edges, and is a zero-symmetric and cubic Archimedean graph. Related polyhedra and tilings This polyhedron can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p) and Coxeter-Dynkin diagram . For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedra), shown below as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling. Notes References Cromwell, P., Polyhedra, Cambridge University Press, hbk (1997), pbk (1999). External links Editable printable net of a truncated icosidodecahedron with interactive 3D view The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra Uniform polyhedra Archimedean solids Truncated tilings Zonohedra Individual graphs Planar graphs
Truncated icosidodecahedron
[ "Physics", "Mathematics" ]
1,174
[ "Symmetry", "Uniform polytopes", "Truncated tilings", "Tessellation", "Planar graphs", "Planes (geometry)", "Uniform polyhedra" ]
286,550
https://en.wikipedia.org/wiki/Safety-critical%20system
A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes: death or serious injury to people loss or severe damage to equipment/property environmental harm A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom. Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10⁹) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation. Reliability regimes Several reliability regimes for safety-critical systems exist: Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-communications was rejected as a control system for the U.S. nuclear forces because it is fail-operational: a loss of communications would cause launch, so this mode of operation was considered too risky. This is contrasted with the fail-deadly behavior of the Perimeter system built during the Soviet era. Fail-soft systems are able to continue operating on an interim basis with reduced efficiency in case of failure. Most spare tires are an example of this: they usually come with certain restrictions (e.g. a speed restriction) and lead to lower fuel economy. Another example is the "Safe Mode" found in most Windows operating systems. Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten the loss of life because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e. turn combustion off when a fault is detected). Famously, nuclear weapon systems that launch-on-command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones will lock, keeping an area secure. Fail-Passive systems continue to operate in the event of a system failure. An example includes an aircraft autopilot. In the event of a failure, the aircraft would remain in a controllable state and allow the pilot to take over and complete the journey and perform a safe landing. Fault-tolerant systems avoid service failure when faults are introduced to the system. An example may include control systems for ordinary nuclear reactors. The normal method to tolerate faults is to have several computers continually test the parts of a system, and switch on hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. The computers, power supplies and control terminals used by human beings must all be duplicated in these systems in some fashion. Software engineering for safety-critical systems Software engineering for safety-critical systems is particularly difficult. There are three aspects which can be applied to aid the engineering software for life-critical systems. First is process engineering and management. Secondly, selecting the appropriate tools and environment for the system. This allows the system developer to effectively test the system by emulation and observe its effectiveness. Thirdly, address any legal and regulatory requirements, such as Federal Aviation Administration requirements for aviation. By setting a standard for which a system is required to be developed under, it forces the designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry, in general, (IEC 61508) and automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements. All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors. Examples of safety-critical systems Infrastructure Circuit breaker Emergency services dispatch systems Electricity generation, transmission and distribution Fire alarm Fire sprinkler Fuse (electrical) Fuse (hydraulic) Life support systems Telecommunications Medicine The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients), and also life support (which is for stabilizing patients). 
Heart-lung machines Anesthetic machines Mechanical ventilation systems Infusion pumps and Insulin pumps Radiation therapy machines Robotic surgery machines Defibrillator machines Pacemaker devices Dialysis machines Devices that electronically monitor vital functions (electrography; especially, electrocardiography, ECG or EKG, and electroencephalography, EEG) Medical imaging devices (X-ray, computerized tomography- CT or CAT, different magnetic resonance imaging- MRI- techniques, positron emission tomography- PET) Even healthcare information systems have significant safety implications Nuclear engineering Nuclear reactor control systems Oil and gas production Process containment Well integrity Hull integrity (for floating production storage and offloading) Jacket and topside structures Lifting equipment Helidecks Mooring systems Fire and gas detection Critical instrumented functions (process shutdown, emergency shutdown) Actuated isolation valves Pressure relief devices Blowdown valves and flare system Drilling well control (blowout preventer, mud and cement) Ventilation and heating, ventilation, and air conditioning Drainage systems Ballast systems Hull cargo tanks inerting system Heading control Ignition prevention (Ex certified electrical equipment, insulated hot surfaces, etc.) Firewater pumps Firewater and foam distribution piping Firewater and foam monitors Deluge valves Gaseous fire suppression systems Firewater hydrants Passive fire protection Temporary Refuge Escape routes Lifeboats and liferafts Personal survival equipment (e.g., lifejackets) Recreation Amusement rides Climbing equipment Parachutes Scuba equipment Diving rebreather Dive computer (depending on use) Transport Railway Railway signalling and control systems Platform detection to control train doors Automatic train stop Automotive Airbag systems Braking systems Seat belts Power Steering systems Advanced driver-assistance systems Electronic throttle control Battery management system for hybrids and electric vehicles Electric park brake Shift by wire systems Drive by wire systems Park by wire Aviation Air traffic control systems Avionics, particularly fly-by-wire systems Radio navigation (Receiver Autonomous Integrity Monitoring) Engine control systems Aircrew life support systems Flight planning to determine fuel requirements for a flight Spaceflight Human spaceflight vehicles Rocket range launch safety systems Launch vehicle safety Crew rescue systems Crew transfer systems See also High integrity software Real-time computing (risk analysis software) References External links An Example of a Life-Critical System Safety-critical systems Virtual Library Explanation of Fail Operational and Fail Passive in Avionics NASA Technical Standards System Software Assurance and Software Safety Standard Computer systems Control engineering Engineering failures Formal methods Safety Risk analysis Process safety Safety engineering Software quality
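The fault-tolerant regime described earlier (several computers continually cross-checking one another and switching in hot spares) is often realized with majority voting between redundant channels. The sketch below is a generic, illustrative triple-modular-redundancy voter; it is not taken from any particular certified system, and the channel readings and tolerance are made-up placeholder values.

```python
from statistics import median

def tmr_vote(readings, tolerance):
    """2-out-of-3 voting: accept the median if at least one pair of channels
    agrees within `tolerance`; otherwise report a fault so the system can fail safe."""
    a, b, c = readings
    agreeing_pairs = sum(abs(x - y) <= tolerance for x, y in ((a, b), (a, c), (b, c)))
    if agreeing_pairs >= 1:
        return median(readings), None
    return None, "no two channels agree - enter safe state"

# Placeholder values standing in for three redundant sensors; the third has failed high.
value, fault = tmr_vote([101.2, 100.9, 250.0], tolerance=2.0)
print("voted value:", value, "| fault:", fault)
```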
Safety-critical system
[ "Chemistry", "Technology", "Engineering" ]
1,730
[ "Systems engineering", "Computer engineering", "Reliability engineering", "Safety engineering", "Technological failures", "Computer systems", "Engineering failures", "Software engineering", "Computer science", "Control engineering", "Civil engineering", "Process safety", "Chemical process en...
286,681
https://en.wikipedia.org/wiki/Flash%20point
The flash point of a material is the "lowest liquid temperature at which, under certain standardized conditions, a liquid gives off vapours in a quantity such as to be capable of forming an ignitable vapour/air mixture". The flash point is sometimes confused with the autoignition temperature, the temperature that causes spontaneous ignition. The fire point is the lowest temperature at which the vapors keep burning after the ignition source is removed. It is higher than the flash point, because at the flash point vapor may not be produced fast enough to sustain combustion. Neither flash point nor fire point depends directly on the ignition source temperature, but ignition source temperature is far higher than either the flash or fire point, and can increase the temperature of fuel above the usual ambient temperature to facilitate ignition. Fuels The flash point is a descriptive characteristic that is used to distinguish between flammable fuels, such as petrol (also known as gasoline), and combustible fuels, such as diesel. It is also used to characterize the fire hazards of fuels. Fuels which have a flash point less than are called flammable, whereas fuels having a flash point above that temperature are called combustible. Mechanism All liquids have a specific vapor pressure, which is a function of that liquid's temperature and is subject to Boyle–Mariotte law. As temperature increases, vapor pressure increases. As vapor pressure increases, the concentration of vapor of a flammable or combustible liquid in the air increases. Hence, temperature determines the concentration of vapor of the flammable liquid in the air. A certain concentration of a flammable or combustible vapor is necessary to sustain combustion in air, the lower flammable limit, and that concentration is specific to each flammable or combustible liquid. The flash point is the lowest temperature at which there will be enough flammable vapor to support combustion when an ignition source is applied. Measurement There are two basic types of flash point measurement: open cup and closed cup. In open cup devices, the sample is contained in an open cup which is heated and, at intervals, a flame is brought over the surface. The measured flash point will actually vary with the height of the flame above the liquid surface and, at sufficient height, the measured flash point temperature will coincide with the fire point. The best-known example is the Cleveland open cup (COC). There are two types of closed cup testers: non-equilibrial, such as Pensky-Martens, where the vapours above the liquid are not in temperature equilibrium with the liquid, and equilibrial, such as Small Scale (commonly known as Setaflash), where the vapours are deemed to be in temperature equilibrium with the liquid. In both these types, the cups are sealed with a lid through which the ignition source can be introduced. Closed cup testers normally give lower values for the flash point than open cup (typically lower) and are a better approximation to the temperature at which the vapour pressure reaches the lower flammable limit. In addition to the Pensky-Martens flash point testers, other non-equilibrial testers include TAG and Abel, both of which are capable of cooling the sample below ambient for low flash point materials. The TAG flash point tester adheres to ASTM D56 and has no stirrer, while the Abel flash point tester adheres to IP 170 and ISO 13736 and has a stirring motor so the sample is stirred during testing.
The flash point is an empirical measurement rather than a fundamental physical parameter. The measured value will vary with equipment and test protocol variations, including temperature ramp rate (in automated testers), time allowed for the sample to equilibrate, sample volume and whether the sample is stirred. Methods for determining the flash point of a liquid are specified in many standards. For example, testing by the Pensky-Martens closed cup method is detailed in ASTM D93, IP34, ISO 2719, DIN 51758, JIS K2265 and AFNOR M07-019. Determination of flash point by the Small Scale closed cup method is detailed in ASTM D3828 and D3278, EN ISO 3679 and 3680, and IP 523 and 524. CEN/TR 15138 Guide to Flash Point Testing and ISO TR 29662 Guidance for Flash Point Testing cover the key aspects of flash point testing. Examples Gasoline (petrol) is a fuel used in a spark-ignition engine. The fuel is mixed with air within its flammable limits and heated by compression and subject to Boyle's law above its flash point, then ignited by the spark plug. To ignite, the fuel must have a low flash point, but in order to avoid preignition caused by residual heat in a hot combustion chamber, the fuel must have a high autoignition temperature. Diesel fuel flash points vary between . Diesel is suitable for use in a compression-ignition engine. Air is compressed until it heats above the autoignition temperature of the fuel, which is then injected as a high-pressure spray, keeping the fuel-air mix within flammable limits. A diesel-fueled engine has no ignition source (such as the spark plugs in a gasoline engine), so diesel fuel can have a high flash point, but must have a low autoignition temperature. Jet fuel flash points also vary with the composition of the fuel. Both Jet A and Jet A-1 have flash points between , close to that of off-the-shelf kerosene. Yet both Jet B and JP-4 have flash points between . Standardization Flash points of substances are measured according to standard test methods described and defined in a 1938 publication by T.L. Ainsley of South Shields entitled "Sea Transport of Petroleum" (Capt. P. Jansen). The test methodology defines the apparatus required to carry out the measurement, key test parameters, the procedure for the operator or automated apparatus to follow, and the precision of the test method. Standard test methods are written and controlled by a number of national and international committees and organizations. The three main bodies are the CEN / ISO Joint Working Group on Flash Point (JWG-FP), ASTM D02.8B Flammability Section and the Energy Institute's TMS SC-B-4 Flammability Panel. See also Autoignition temperature Fire point Safety data sheet (SDS) References Combustion Threshold temperatures
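The Mechanism section above characterizes the flash point as the lowest temperature at which the vapour pressure first delivers a vapour concentration equal to the lower flammable limit. A rough, back-of-the-envelope rendering of that statement is sketched below using the Antoine vapour-pressure equation; the Antoine coefficients and the LFL value are assumed, purely illustrative numbers for a hypothetical solvent, and real flash points must be measured with the standardized apparatus described above.

```python
def vapor_pressure_kpa(T_c, A, B, C):
    """Antoine equation: log10(P [kPa]) = A - B / (C + T [degC])."""
    return 10 ** (A - B / (C + T_c))

def estimated_flash_point(A, B, C, lfl_vol_frac, p_atm_kpa=101.325):
    """Lowest temperature at which the vapour mole fraction reaches the LFL,
    found by bisection on the monotonically increasing vapour-pressure curve."""
    target = lfl_vol_frac * p_atm_kpa
    lo, hi = -60.0, 150.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if vapor_pressure_kpa(mid, A, B, C) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed Antoine coefficients (kPa, degC basis) and LFL for a hypothetical volatile solvent.
print(f"estimated flash point ~ {estimated_flash_point(6.0, 1200.0, 230.0, 0.012):.1f} degC")
```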
Flash point
[ "Physics", "Chemistry" ]
1,332
[ "Physical phenomena", "Phase transitions", "Threshold temperatures", "Combustion" ]
18,712,595
https://en.wikipedia.org/wiki/Tollmien%E2%80%93Schlichting%20wave
In fluid dynamics, a Tollmien–Schlichting wave (often abbreviated T-S wave) is a streamwise unstable wave which arises in a bounded shear flow (such as boundary layer and channel flow). It is one of the more common methods by which a laminar bounded shear flow transitions to turbulence. The waves are initiated when some disturbance (sound, for example) interacts with leading edge roughness in a process known as receptivity. These waves are slowly amplified as they move downstream until they may eventually grow large enough that nonlinearities take over and the flow transitions to turbulence. These waves, originally discovered by Ludwig Prandtl, were further studied by two of his former students, Walter Tollmien and Hermann Schlichting after whom the phenomenon is named. Also, the T-S wave is defined as the most unstable eigenmode of the Orr–Sommerfeld equation. Physical mechanism In order for a boundary layer to be absolutely unstable (have an inviscid instability), it must satisfy Rayleigh's criterion; namely d²U/dy² = 0 somewhere in the flow, where d/dy represents the wall-normal (y) derivative and U is the free stream velocity profile. In other words, the velocity profile must have an inflection point to be unstable. It is clear that in a typical boundary layer with a zero pressure gradient, the flow will be unconditionally stable; however, we know from experience this is not the case and the flow does transition. It is clear, then, that viscosity must be an important factor in the instability. It can be shown using energy methods that the perturbation kinetic energy evolves as d/dt ∫ ½(u′² + v′²) dV = −∫ u′v′ (dU/dy) dV − (1/Re) ∫ |∇u′|² dV. The rightmost term is a viscous dissipation term and is stabilizing. The first term on the right-hand side, however, is the Reynolds stress term and is the primary production mechanism for instability growth. In an inviscid flow, the u′ and v′ fluctuations are orthogonal (90° out of phase), so the u′v′ term averages to zero, as one would expect. However, with the addition of viscosity, the two components are no longer orthogonal and the u′v′ term becomes nonzero. In this regard, viscosity is destabilizing and is the reason for the formation of T-S waves. Transition phenomena Initial disturbance In a laminar boundary layer, if the initial disturbance spectrum is nearly infinitesimal and random (with no discrete frequency peaks), the initial instability will occur as two-dimensional Tollmien–Schlichting waves, travelling in the mean flow direction if compressibility is not important. However, three-dimensionality soon appears as the Tollmien–Schlichting waves rather quickly begin to show variations. There are known to be many paths from Tollmien–Schlichting waves to turbulence, and many of them are explained by the non-linear theories of flow instability. Final transition A shear layer develops viscous instability and forms Tollmien–Schlichting waves which grow, while still laminar, into finite amplitude (1 to 2 percent of the freestream velocity) three-dimensional fluctuations in velocity and pressure to develop three-dimensional unstable waves and hairpin eddies. From then on, the process is more a breakdown than a growth. The longitudinally stretched vortices begin a cascading breakdown into smaller units, until the relevant frequencies and wave numbers are approaching randomness. Then in this diffusively fluctuating state, intense local changes occur at random times and locations in the shear layer near the wall. At the locally intense fluctuations, turbulent 'spots' are formed that burst forth in the form of growing and spreading spots — the result of which is a fully turbulent state downstream.
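The inflection-point statement above is easy to check numerically for a given mean-velocity profile. The sketch below is illustrative only: the two model profiles are assumed analytic stand-ins (not solved boundary-layer profiles), and the test simply evaluates d²U/dy² and looks for a sign change in the interior of the layer.

```python
import numpy as np

def has_inflection_point(U, y):
    """Rayleigh's necessary condition for inviscid instability:
    d2U/dy2 must change sign somewhere inside the shear layer."""
    d2U = np.gradient(np.gradient(U, y), y)
    interior = d2U[2:-2]                      # ignore the one-sided estimates at the ends
    return bool(np.any(interior > 0) and np.any(interior < 0))

y = np.linspace(0.0, 5.0, 401)
attached_profile = 1.0 - np.exp(-2.0 * y)     # monotone profile, curvature negative everywhere
mixing_layer     = np.tanh(y - 2.5)           # free-shear profile with an inflection point

print("attached-boundary-layer-like profile, inviscidly unstable by Rayleigh:",
      has_inflection_point(attached_profile, y))
print("free-shear-layer profile, inviscidly unstable by Rayleigh:",
      has_inflection_point(mixing_layer, y))
```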
The simple harmonic transverse sound of Tollmien–Schlichting (T-S) waves Tollmien (1931) and Schlichting (1929) theorized that viscosity-induced grabbing and releasing of laminae created long-crested simple harmonic (SH) oscillations (vibrations) along a smooth flat boundary, at a flow rate approaching the onset of turbulence. These T-S waves would gradually increase in amplitude until they broke up into the vortices, noise and high resistance that characterize turbulent flow. Contemporary wind tunnels failed to show T-S waves. In 1943, Schubauer and Skramstad (S and S) created a wind tunnel that went to extremes to damp mechanical vibrations and sounds that might affect the airflow studies along a smooth flat plate. Using a vertical array of evenly spaced hot wire anemometers in the boundary layer (BL) airflow, they substantiated the existence of T-S oscillations by showing SH velocity fluctuations in the BL laminae. The T-S waves gradually increased in amplitude until a few random spikes of in-phase amplitude appeared, triggering focal vortices (turbulent spots), with noise. A further increase in flow rate resulted suddenly in many vortices, aerodynamic noise and a great increase in resistance to flow. An oscillation of a mass in a fluid creates a sound wave; SH oscillations of a mass of fluid, flowing in that same fluid along a boundary, must result in SH sound, reflected off the boundary, transversely into the fluid. S and S found foci of in-phase spiking amplitude in the T-S waves; these must create bursts of high amplitude sound, with high energy oscillation of fluid molecules transversely through the BL laminae. This has the potential to freeze laminar slip (laminar interlocking) in these spots, transferring the resistance to the boundary: this breaking at the boundary could rip out pieces of T-S long-crested waves which would tumble head-over-heels downstream in the boundary layer as the vortices of turbulent spots. With further increase in flow rate, there is an explosion into turbulence, with many random vortices and the noise of aerodynamic sound. Schubauer and Skramstad overlooked the significance of the co-generation of transverse SH sound by the T-S waves in transition and turbulence. However, John Tyndall (1867) in his transition-to-turbulence flow studies using flames, deduced that SH waves were created during transition by viscosity acting around the walls of a tube and these could be amplified by blending with similar SH sound waves (from a whistle), triggering turbulence at lower flow rates. Schubauer and Skramstad introduced SH sound into the boundary layer by creating SH fluttering vibrations of a BL ferromagnetic ribbon in their 1941 experiments, similarly triggering turbulence at lower flow rates. Tyndall’s contribution towards explaining the mystery of transition to turbulence 150 years ago is beginning to gain recognition. References Waves Fluid dynamics
Tollmien–Schlichting wave
[ "Physics", "Chemistry", "Engineering" ]
1,371
[ "Physical phenomena", "Chemical engineering", "Waves", "Motion (physics)", "Piping", "Fluid dynamics" ]
18,713,565
https://en.wikipedia.org/wiki/Low%20voltage%20ride%20through
In electrical power engineering, fault ride through (FRT), sometimes under-voltage ride through (UVRT), or low voltage ride through (LVRT), is the capability of electric generators to stay connected in short periods of lower electric network voltage (cf. voltage sag). It is needed at distribution level (wind parks, PV systems, distributed cogeneration, etc.) to prevent a short circuit at HV or EHV level from causing a widespread loss of generation. Similar requirements for critical loads such as computer systems and industrial processes are often handled through the use of an uninterruptible power supply (UPS) or capacitor bank to supply make-up power during these events. General concept Many generator designs use electric current flowing through windings to produce the magnetic field on which the motor or generator operates. This is in contrast to designs that use permanent magnets to generate this field instead. Such devices may have a minimum working voltage, below which the device does not work correctly, or does so at greatly reduced efficiency. Some will disconnect themselves from the circuit when these conditions apply. The effect is more pronounced in doubly-fed induction generators (DFIG), which have two sets of powered magnetic windings, than in squirrel-cage induction generators, which have only one. Synchronous generators may slip and become unstable if the voltage of the stator winding goes below a certain threshold. Risk of chain reaction In a grid containing many distributed generators subject to disconnection at under voltage, it is possible to cause a chain reaction that takes other generators offline as well. This can occur in the event of a voltage dip that causes one of the generators to disconnect from the grid. As voltage dips are often caused by too little generation for the load in a distribution grid, removing generation can cause the voltage to drop further. This may bring the voltage down enough to cause another generator to trip, lower the voltage even further, and may cause a cascading failure. Ride through systems Modern large-scale wind turbines, typically 1 MW and larger, are normally required to include systems that allow them to operate through such an event, and thereby “ride through” the voltage dip. Similar requirements are now becoming common on large solar power installations that likewise might cause instability in the event of a widespread disconnection of generating units. Depending on the application, the device may, during and after the dip, be required to: disconnect and stay disconnected until manually ordered to reconnect disconnect temporarily from the grid, but reconnect and continue operation after the dip stay operational and not disconnect from the grid stay connected and support the grid with reactive power (defined as the reactive current of the positive sequence of the fundamental) Standards A variety of standards exist and generally vary across jurisdictions. Examples of such grid codes are the German BDEW grid code and its supplements 2, 3, and 4, as well as the National Grid Code in the UK. Testing For wind turbines, the FRT testing is described in the standard IEC 61400-21 (2nd edition August 2008). More detailed testing procedures are stated in the German guideline FGW TR3 (Rev. 22). Testing of devices with less than 16 A rated current is described in the EMC standard IEC 61000-4-11 and for higher current devices in IEC 61000-4-34. References See also Voltage dip Electric power
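Grid codes typically express the ride-through obligation as a voltage-versus-time envelope after the fault: the generator must stay connected for as long as the measured voltage remains on or above the curve. The sketch below checks a dip profile against such an envelope; the corner points of the curve and the measured values are invented, illustrative numbers, not those of the BDEW or UK grid codes mentioned above.

```python
import numpy as np

# Illustrative ride-through envelope as (time after fault [s], minimum voltage [pu]) corner points.
# A real envelope would be taken from the applicable grid code.
ENVELOPE = [(0.0, 0.0), (0.15, 0.15), (1.5, 0.85), (3.0, 0.90)]

def must_stay_connected(t_s, v_pu):
    """True if the measured voltage is on or above the envelope at time t_s after the fault."""
    times = [p[0] for p in ENVELOPE]
    volts = [p[1] for p in ENVELOPE]
    return v_pu >= np.interp(t_s, times, volts)

# Hypothetical measured points during a dip and recovery.
for t, v in [(0.05, 0.20), (0.30, 0.40), (1.0, 0.50), (2.5, 0.95)]:
    print(f"t = {t:4.2f} s, V = {v:.2f} pu -> must stay connected: {must_stay_connected(t, v)}")
```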
Low voltage ride through
[ "Physics", "Engineering" ]
707
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
18,715,724
https://en.wikipedia.org/wiki/Grushko%20theorem
In the mathematical subject of group theory, the Grushko theorem or the Grushko–Neumann theorem is a theorem stating that the rank (that is, the smallest cardinality of a generating set) of a free product of two groups is equal to the sum of the ranks of the two free factors. The theorem was first obtained in a 1940 article of Grushko and then, independently, in a 1943 article of Neumann. Statement of the theorem Let A and B be finitely generated groups and let A∗B be the free product of A and B. Then rank(A∗B) = rank(A) + rank(B). It is obvious that rank(A∗B) ≤ rank(A) + rank(B) since if X is a finite generating set of A and Y is a finite generating set of B then X∪Y is a generating set for A∗B and that |X ∪ Y| ≤ |X| + |Y|. The opposite inequality, rank(A∗B) ≥ rank(A) + rank(B), requires proof. Grushko, but not Neumann, proved a more precise version of Grushko's theorem in terms of Nielsen equivalence. It states that if M = (g1, g2, ..., gn) is an n-tuple of elements of G = A∗B such that M generates G, <g1, g2, ..., gn> = G, then M is Nielsen equivalent in G to an n-tuple of the form M = (a1, ..., ak, b1, ..., bn−k) where {a1, ..., ak}⊆A is a generating set for A and where {b1, ..., bn−k}⊆B is a generating set for B. In particular, rank(A) ≤ k, rank(B) ≤ n − k and rank(A) + rank(B) ≤ k + (n − k) = n. If one takes M to be the minimal generating tuple for G, that is, with n = rank(G), this implies that rank(A) + rank(B) ≤ rank(G). Since the opposite inequality, rank(G) ≤ rank(A) + rank(B), is obvious, it follows that rank(G)=rank(A) + rank(B), as required. History and generalizations After the original proofs of Grushko (1940) and Neumann(1943), there were many subsequent alternative proofs, simplifications and generalizations of Grushko's theorem. A close version of Grushko's original proof is given in the 1955 book of Kurosh. Like the original proofs, Lyndon's proof (1965) relied on length-functions considerations but with substantial simplifications. A 1965 paper of Stallings gave a greatly simplified topological proof of Grushko's theorem. A 1970 paper of Zieschang gave a Nielsen equivalence version of Grushko's theorem (stated above) and provided some generalizations of Grushko's theorem for amalgamated free products. Scott (1974) gave another topological proof of Grushko's theorem, inspired by the methods of 3-manifold topology Imrich (1984) gave a version of Grushko's theorem for free products with infinitely many factors. A 1976 paper of Chiswell gave a relatively straightforward proof of Grushko's theorem, modelled on Stallings' 1965 proof, that used the techniques of Bass–Serre theory. The argument directly inspired the machinery of foldings for group actions on trees and for graphs of groups and Dicks' even more straightforward proof of Grushko's theorem (see, for example, John R. Stallings. "Foldings of G-trees." Arboreal group theory (Berkeley, California, 1988), pp. 355–368, Mathematical Sciences Research Institute Publications, 19. Springer, New York, 1991; ). Grushko's theorem is, in a sense, a starting point in Dunwoody's theory of accessibility for finitely generated and finitely presented groups. Since the ranks of the free factors are smaller than the rank of a free product, Grushko's theorem implies that the process of iterated splitting of a finitely generated group G as a free product must terminate in a finite number of steps (more precisely, in at most rank(G) steps). 
There is a natural similar question for iterating splittings of finitely generated groups over finite subgroups. Dunwoody proved that such a process must always terminate if a group G is finitely presented but may go on forever if G is finitely generated but not finitely presented. An algebraic proof of a substantial generalization of Grushko's theorem using the machinery of groupoids was given by Higgins (1966). Higgins' theorem starts with groups G and B with free decompositions G = ∗i Gi, B = ∗i Bi and f : G → B a morphism such that f(Gi) = Bi for all i. Let H be a subgroup of G such that f(H) = B. Then H has a decomposition H = ∗i Hi such that f(Hi) = Bi for all i. Full details of the proof and applications may also be found in Higgins, Philip J., Notes on categories and groupoids, Van Nostrand Reinhold Mathematical Studies, No. 32, Van Nostrand Reinhold Co., London–New York–Melbourne, 1971 (reprinted as Theory and Applications of Categories Reprint No. 7, 2005). Grushko decomposition theorem A useful consequence of the original Grushko theorem is the so-called Grushko decomposition theorem. It asserts that any nontrivial finitely generated group G can be decomposed as a free product G = A1∗A2∗...∗Ar∗Fs, where s ≥ 0, r ≥ 0, where each of the groups Ai is nontrivial, freely indecomposable (that is, it cannot be decomposed as a free product) and not infinite cyclic, and where Fs is a free group of rank s; moreover, for a given G, the groups A1, ..., Ar are unique up to a permutation of their conjugacy classes in G (and, in particular, the sequence of isomorphism types of these groups is unique up to a permutation) and the numbers s and r are unique as well. More precisely, if G = B1∗...∗Bk∗Ft is another such decomposition then k = r, s = t, and there exists a permutation σ∈Sr such that for each i=1,...,r the subgroups Ai and Bσ(i) are conjugate in G. The existence of the above decomposition, called the Grushko decomposition of G, is an immediate corollary of the original Grushko theorem, while the uniqueness statement requires additional arguments (see, for example). Algorithmically computing the Grushko decomposition for specific classes of groups is a difficult problem which primarily requires being able to determine if a given group is freely decomposable. Positive results are available for some classes of groups such as torsion-free word-hyperbolic groups, certain classes of relatively hyperbolic groups, fundamental groups of finite graphs of finitely generated free groups and others. The Grushko decomposition theorem is a group-theoretic analog of the Kneser prime decomposition theorem for 3-manifolds, which says that a closed 3-manifold can be uniquely decomposed as a connected sum of irreducible 3-manifolds. Sketch of the proof using Bass–Serre theory The following is a sketch of the proof of Grushko's theorem based on the use of folding techniques for groups acting on trees (see for complete proofs using this argument). Let S={g1,....,gn} be a finite generating set for G=A∗B of size |S|=n=rank(G). Realize G as the fundamental group of a graph of groups Y which is a single non-loop edge with vertex groups A and B and with the trivial edge group. Let be the Bass–Serre covering tree for Y. Let F=F(x1,....,xn) be the free group with free basis x1,....,xn and let φ0:F → G be the homomorphism such that φ0(xi)=gi for i=1,...,n. Realize F as the fundamental group of a graph Z0 which is the wedge of n circles that correspond to the elements x1,....,xn.
We also think of Z0 as a graph of groups with the underlying graph Z0 and the trivial vertex and edge groups. Then the universal cover of Z0 and the Bass–Serre covering tree for Z0 coincide. Consider a φ0-equivariant map so that it sends vertices to vertices and edges to edge-paths. This map is non-injective and, since both the source and the target of the map are trees, this map "folds" some edge-pairs in the source. The graph of groups Z0 serves as an initial approximation for Y. We now start performing a sequence of "folding moves" on Z0 (and on its Bass-Serre covering tree) to construct a sequence of graphs of groups Z0, Z1, Z2, ...., that form better and better approximations for Y. Each of the graphs of groups Zj has trivial edge groups and comes with the following additional structure: for each nontrivial vertex group of it there assigned a finite generating set of that vertex group. The complexity c(Zj) of Zj is the sum of the sizes of the generating sets of its vertex groups and the rank of the free group π1(Zj). For the initial approximation graph we have c(Z0)=n. The folding moves that take Zj to Zj+1 can be of one of two types: folds that identify two edges of the underlying graph with a common initial vertex but distinct end-vertices into a single edge; when such a fold is performed, the generating sets of the vertex groups and the terminal edges are "joined" together into a generating set of the new vertex group; the rank of the fundamental group of the underlying graph does not change under such a move. folds that identify two edges, that already had common initial vertices and common terminal vertices, into a single edge; such a move decreases the rank of the fundamental group of the underlying graph by 1 and an element that corresponded to the loop in the graph that is being collapsed is "added" to the generating set of one of the vertex groups. One sees that the folding moves do not increase complexity but they do decrease the number of edges in Zj. Therefore, the folding process must terminate in a finite number of steps with a graph of groups Zk that cannot be folded any more. It follows from the basic Bass–Serre theory considerations that Zk must in fact be equal to the edge of groups Y and that Zk comes equipped with finite generating sets for the vertex groups A and B. The sum of the sizes of these generating sets is the complexity of Zk which is therefore less than or equal to c(Z0)=n. This implies that the sum of the ranks of the vertex groups A and B is at most n, that is rank(A)+rank(B)≤rank(G), as required. Sketch of Stalling's proof Stallings' proof of Grushko Theorem follows from the following lemma. Lemma Let F be a finitely generated free group, with n generators. Let G1 and G2 be two finitely presented groups. Suppose there exists a surjective homomorphism . Then there exists two subgroups F1 and F2 of F with and , such that Proof: We give the proof assuming that F has no generator which is mapped to the identity of , for if there are such generators, they may be added to any of or . The following general results are used in the proof. 1. There is a one or two dimensional CW complex, Z with fundamental group F. By Van Kampen theorem, the wedge of n circles is one such space. 2. There exists a two complex where is a point on a one cell of X such that X1 and X2 are two complexes with fundamental groups G1 and G2 respectively. Note that by the Van Kampen theorem, this implies that the fundamental group of X is . 3. 
There exists a map such that the induced map on the fundamental groups is same as For the sake of convenience, let us denote and . Since no generator of F maps to identity, the set has no loops, for if it does, these will correspond to circles of Z which map to , which in turn correspond to generators of F which go to the identity. So, the components of are contractible. In the case where has only one component, by Van Kampen's theorem, we are done, as in that case, :. The general proof follows by reducing Z to a space homotopically equivalent to it, but with fewer components in , and thus by induction on the components of . Such a reduction of Z is done by attaching discs along binding ties. We call a map a binding tie if it satisfies the following properties 1. It is monochromatic i.e. or 2. It is a tie i.e. and lie in different components of . 3. It is null i.e. is null homotopic in X'''. Let us assume that such a binding tie exists. Let be the binding tie. Consider the map given by . This map is a homeomorphism onto its image. Define the space as where : Note that the space Z' deformation retracts to Z We first extend f to a function as Since the is null homotopic, further extends to the interior of the disc, and therefore, to . Let i = 1,2. As and lay in different components of , has one less component than . Construction of binding tie The binding tie is constructed in two steps. Step 1: Constructing a null tie: Consider a map with and in different components of . Since is surjective, there exits a loop based at γ'(1) such that and are homotopically equivalent in X. If we define a curve as for all , then is a null tie. Step 2: Making the null tie monochromatic: The tie may be written as where each is a curve in or such that if is in , then is in and vice versa. This also implies that is a loop based at p in X. So, Hence, for some j. If this is a tie, then we have a monochromatic, null tie. If is not a tie, then the end points of are in the same component of . In this case, we replace by a path in , say . This path may be appended to and we get a new null tie , where . Thus, by induction on m, we prove the existence of a binding tie. Proof of Grushko theorem Suppose that is generated by . Let be the free group with -generators, viz. . Consider the homomorphism given by , where . By the lemma, there exists free groups and with such that and . Therefore, and . Therefore, See also Bass–Serre theory Generating set of a group Notes Geometric group theory Geometric topology Theorems in group theory
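The Bass–Serre proof sketch above tracks a single non-negative integer, the complexity c(Zj) (the sum of the vertex-group generating-set sizes plus the rank of the fundamental group of the underlying graph), and observes that neither type of fold increases it. The toy bookkeeping below only illustrates that accounting with made-up numbers; it does not implement folds on actual graphs of groups, and the starting data are arbitrary.

```python
def complexity(vertex_gen_sizes, free_rank):
    """c(Z) = sum of vertex-group generating-set sizes + rank of pi_1 of the underlying graph."""
    return sum(vertex_gen_sizes) + free_rank

def fold_type1(vertex_gen_sizes, free_rank, i, j):
    """First kind of fold: two vertex groups are merged and their generating sets are joined."""
    sizes = [s for k, s in enumerate(vertex_gen_sizes) if k not in (i, j)]
    sizes.append(vertex_gen_sizes[i] + vertex_gen_sizes[j])
    return sizes, free_rank

def fold_type2(vertex_gen_sizes, free_rank, i):
    """Second kind of fold: a loop collapses, the free rank drops by 1 and one
    element is added to the generating set of a vertex group."""
    sizes = list(vertex_gen_sizes)
    sizes[i] += 1
    return sizes, free_rank - 1

state = ([0, 0], 5)   # arbitrary start: two trivial vertex groups, underlying graph of rank 5
for move in (lambda s: fold_type2(*s, i=0),
             lambda s: fold_type1(*s, i=0, j=1),
             lambda s: fold_type2(*s, i=0)):
    new_state = move(state)
    assert complexity(*new_state) <= complexity(*state)   # folds never increase complexity
    state = new_state
print("generating-set sizes and free rank:", state, "| complexity:", complexity(*state))
```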
Grushko theorem
[ "Physics", "Mathematics" ]
3,273
[ "Geometric group theory", "Group actions", "Geometric topology", "Topology", "Symmetry" ]
23,983,793
https://en.wikipedia.org/wiki/Tetraploid%20complementation%20assay
The tetraploid complementation assay is a technique in biology in which cells of two mammalian embryos are combined to form a new embryo. It is used to construct genetically modified organisms, to study the consequences of certain mutations on embryonal development, and in the study of pluripotent stem cells. Procedure Normal mammalian somatic cells are diploid: each chromosome (and thus every gene) is present in duplicate (excluding genes from X chromosome absent in Y chromosome). The assay starts with producing a tetraploid cell in which every chromosome exists fourfold. This is done by taking an embryo at the two-cell stage and fusing the two cells by applying an electrical current. The resulting tetraploid cell will continue to divide, and all daughter cells will also be tetraploid. Such a tetraploid embryo can develop normally to the blastocyst stage and will implant in the wall of the uterus. The tetraploid cells can form the extra-embryonic tissue (placenta, etc.); however, a proper fetus will rarely develop. In the tetraploid complementation assay, one now combines such a tetraploid embryo (either at the morula or blastocyst stage) with normal diploid embryonic stem cells (ES) from a different organism. The embryo will then develop normally; the fetus is exclusively derived from the ES cell, while the extra-embryonic tissues are exclusively derived from the tetraploid cells. Applications Foreign genes or mutations can be introduced into ES cells rather easily, and these ES cells can then be grown into whole animals using the tetraploid complementation assay. By introducing targeted mutations into the tetraploid cells and/or into the ES cells, one can study which genes are important for fetal development and which ones are important for development of the extra-embryonic tissues. The tetraploid complementation assay is also used to test whether induced pluripotent stem cells (stem cells artificially produced from differentiated cells, e.g. from skin cells) are as competent as normal embryonal stem cells. If a viable animal can be produced from an induced pluripotent stem cell using the tetraploid complementation assay, then the induced stem cells are deemed equivalent to embryonal stem cells. This was first shown in 2009. References Stem cells Genetic engineering
Tetraploid complementation assay
[ "Chemistry", "Engineering", "Biology" ]
495
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
23,984,205
https://en.wikipedia.org/wiki/Strangeness%20and%20quark%E2%80%93gluon%20plasma
In high-energy nuclear physics, strangeness production in relativistic heavy-ion collisions is a signature and diagnostic tool of quark–gluon plasma (QGP) formation and properties. Unlike up and down quarks, from which everyday matter is made, heavier quark flavors such as strange and charm typically approach chemical equilibrium in a dynamic evolution process. QGP (also known as quark matter) is an interacting localized assembly of quarks and gluons at thermal (kinetic) and not necessarily chemical (abundance) equilibrium. The word plasma signals that color charged particles (quarks and/or gluons) are able to move in the volume occupied by the plasma. The abundance of strange quarks is formed in pair-production processes in collisions between constituents of the plasma, creating the chemical abundance equilibrium. The dominant mechanism of production involves gluons only present when matter has become a quark–gluon plasma. When quark–gluon plasma disassembles into hadrons in a breakup process, the high availability of strange antiquarks helps to produce antimatter containing multiple strange quarks, which is otherwise rarely made. Similar considerations are at present made for the heavier charm flavor, which is made at the beginning of the collision process in the first interactions and is only abundant in the high-energy environments of CERN's Large Hadron Collider. Quark–gluon plasma in the early universe and in the laboratory Free quarks probably existed in the extreme conditions of the very early universe until about 30 microseconds after the Big Bang, in a very hot gas of free quarks, antiquarks and gluons. This gas is called quark–gluon plasma (QGP), since the quark-interaction charge (color charge) is mobile and quarks and gluons move around. This is possible because at a high temperature the early universe is in a different vacuum state, in which normal matter cannot exist but quarks and gluons can; they are deconfined (able to exist independently as separate unbound particles). In order to recreate this deconfined phase of matter in the laboratory it is necessary to exceed a minimum temperature, or its equivalent, a minimum energy density. Scientists achieve this using particle collisions at extremely high speeds, where the energy released in the collision can raise the subatomic particles' energies to an exceedingly high level, sufficient for them to briefly form a tiny amount of quark–gluon plasma that can be studied in laboratory experiments for little more than the time light needs to cross the QGP fireball, thus about 10⁻²² s. After this brief time the hot drop of quark plasma evaporates in a process called hadronization. This is so since practically all QGP components flow out at relativistic speed. In this way, it is possible to study conditions akin to those in the early Universe at the age of 10–40 microseconds. Discovery of this new QGP state of matter has been announced both at CERN and at Brookhaven National Laboratory (BNL). Preparatory work, allowing for these discoveries, was carried out at the Joint Institute for Nuclear Research (JINR) and Lawrence Berkeley National Laboratory (LBNL) at the Bevalac. New experimental facilities, FAIR at the GSI Helmholtz Centre for Heavy Ion Research (GSI) and NICA at JINR, are under construction. Strangeness as a signature of QGP was first explored in 1983. Comprehensive experimental evidence about its properties is being assembled.
Recent work by the ALICE collaboration at CERN has opened a new path to the study of QGP and strangeness production in very high energy pp collisions. Strangeness in quark–gluon plasma The diagnosis and the study of the properties of quark–gluon plasma can be undertaken using quarks not present in matter seen around us. The experimental and theoretical work relies on the idea of strangeness enhancement. This was the first observable of quark–gluon plasma proposed in 1980 by Johann Rafelski and Rolf Hagedorn. Unlike the up and down quarks, strange quarks are not brought into the reaction by the colliding nuclei. Therefore, any strange quarks or antiquarks observed in experiments have been "freshly" made from the kinetic energy of colliding nuclei, with gluons being the catalyst. Conveniently, the mass of strange quarks and antiquarks is equivalent to the temperature or energy at which protons, neutrons and other hadrons dissolve into quarks. This means that the abundance of strange quarks is sensitive to the conditions, structure and dynamics of the deconfined matter phase, and if their number is large it can be assumed that deconfinement conditions were reached. An even stronger signature of strangeness enhancement is the highly enhanced production of strange antibaryons. An early comprehensive review of strangeness as a signature of QGP was presented by Koch, Müller and Rafelski, which was recently updated. The abundance of produced strange antibaryons, and in particular the anti-omega, made it possible to distinguish a fully deconfined large QGP domain from transient collective quark models such as the color rope model proposed by Biró, Nielsen and Knoll. The relative abundance of these antibaryons resolves questions raised by the canonical model of strangeness enhancement. Equilibrium of strangeness in quark–gluon plasma One cannot assume that under all conditions the yield of strange quarks is in thermal equilibrium. In general, the quark-flavor composition of the plasma varies during its ultra short lifetime as new flavors of quarks such as strangeness are cooked up inside. The up and down quarks from which normal matter is made are easily produced as quark–antiquark pairs in the hot fireball because they have small masses. On the other hand, the next lightest quark flavor, strange quarks, will reach its high quark–gluon plasma thermal abundance provided that there is enough time and that the temperature is high enough. This work elaborated the kinetic theory of strangeness production proposed by T. Biro and J. Zimanyi, who demonstrated that strange quarks could not be produced fast enough by quark–antiquark reactions alone. A new mechanism, operating only in QGP, was proposed. Gluon fusion into strangeness Equilibration of the strangeness yield in QGP is only possible due to a new process, gluon fusion, as shown by Rafelski and Müller. The top section of the Feynman diagrams figure shows the new gluon fusion processes: gluons are the wavy lines; strange quarks are the solid lines; time runs from left to right. The bottom section is the process where the heavier quark pair arises from the lighter pair of quarks shown as dashed lines. The gluon fusion process occurs almost ten times faster than the quark-based strangeness process, and allows achievement of the high thermal yield where the quark-based process would fail to do so during the duration of the "micro-bang".
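In symbols, the production processes described above, and the Wroblewski ratio referred to in the text that follows, are conventionally written as shown here; this is standard notation from the strangeness-production literature, not an equation reproduced from this article.

```latex
% Thermal strangeness production channels in QGP (standard notation)
g + g \rightarrow s + \bar{s}, \qquad q + \bar{q} \rightarrow s + \bar{s} \quad (q = u, d)

% Conventional definition of the Wroblewski ratio
\lambda_s = \frac{2\,\langle s\bar{s}\rangle}{\langle u\bar{u}\rangle + \langle d\bar{d}\rangle}
```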
The ratio of newly produced strange quark pairs to the normalized light quark pairs, the Wroblewski ratio, is considered a measure of the efficacy of strangeness production. This ratio more than doubles in heavy ion collisions, providing a model-independent confirmation of a new mechanism of strangeness production operating in collisions that are producing QGP. Regarding charm and bottom flavour: the gluon collisions here are occurring within the thermal matter phase and thus are different from the high energy processes that can ensue in the early stages of the collisions when the nuclei crash into each other. The heavier charm and bottom quarks are dominantly produced there. The study in relativistic nuclear (heavy ion) collisions of charmed and soon also bottom hadronic particle production, beside strangeness, will provide complementary and important confirmation of the mechanisms of formation, evolution and hadronization of quark–gluon plasma in the laboratory. Strangeness (and charm) hadronization These newly cooked strange quarks find their way into a multitude of different final particles that emerge as the hot quark–gluon plasma fireball breaks up, see the scheme of different processes in figure. Given the ready supply of antiquarks in the "fireball", one also finds a multitude of antimatter particles containing more than one strange quark. On the other hand, in a system involving a cascade of nucleon–nucleon collisions, multi-strange antimatter particles are produced less frequently considering that several relatively improbable events must occur in the same collision process. For this reason one expects that the yield of multi-strange antimatter particles produced in the presence of quark matter is enhanced compared to a conventional series of reactions. Strange quarks also bind with the heavier charm and bottom quarks, which also like to bind with each other. Thus, in the presence of a large number of these quarks, quite unusually abundant exotic particles can be produced, some of which have never been observed before. This should be the case in the forthcoming exploration at the new Large Hadron Collider at CERN of the particles that have charm and strange quarks, and even bottom quarks, as components. Strange hadron decay and observation Strange quarks are naturally radioactive and decay by weak interactions into lighter quarks on a timescale that is extremely long compared with the nuclear-collision times. This makes it relatively easy to detect strange particles through the tracks left by their decay products. Consider as an example the decay of a negatively charged Ξ⁻ baryon (green in figure, dss) into a negative pion (dū) and a neutral Λ (uds) baryon. Subsequently, the Λ decays into a proton and another negative pion. In general this is the signature of the decay of a Ξ⁻. Although the negative Ω⁻ (sss) baryon has a similar final state decay topology, it can be clearly distinguished from the Ξ⁻ because its decay products are different. Measurement of abundant formation of Ξ (uss/dss), Ω (sss) and especially their antiparticles is an important cornerstone of the claim that quark–gluon plasma has been formed. This abundant formation is often presented in comparison with the scaled expectation from normal proton–proton collisions; however, such a comparison is not a necessary step in view of the large absolute yields which defy conventional model expectations. The overall yield of strangeness is also larger than expected if the new form of matter has been achieved.
However, considering that the light quarks are also produced in gluon fusion processes, one expects increased production of all hadrons. The study of the relative yields of strange and non-strange particles provides information about the competition of these processes and thus the reaction mechanism of particle production. Systematics of strange matter and antimatter creation The work of Koch, Müller and Rafelski predicts that in a quark–gluon plasma hadronization process the enhancement for each particle species increases with the strangeness content of the particle. The enhancements for particles carrying one, two and three strange or antistrange quarks were measured and this effect was demonstrated by the CERN WA97 experiment in time for the CERN announcement in 2000 of a possible quark–gluon plasma formation in its experiments. These results were elaborated by the successor collaboration NA57 as shown in the enhancement of antibaryon figure. The gradual rise of the enhancement as a function of the variable representing the amount of nuclear matter participating in the collisions, and thus as a function of the geometric centrality of the nuclear collision, strongly favors the quark–gluon plasma source over normal matter reactions. A similar enhancement was obtained by the STAR experiment at RHIC. Here, results for two colliding systems at 100 A GeV in each beam are considered: in red the heavier gold–gold collisions and in blue the smaller copper–copper collisions. The energy at RHIC is 11 times greater in the CM frame of reference compared to the earlier CERN work. The important result is that the enhancement observed by STAR also increases with the number of participating nucleons. We further note that for the most peripheral events, at the smallest number of participants, the copper and gold systems show the same enhancement at the same number of participants, as expected. Another remarkable feature of these results, comparing CERN and STAR, is that the enhancement is of similar magnitude for the vastly different collision energies available in the reaction. This near energy independence of the enhancement also agrees with the quark–gluon plasma approach regarding the mechanism of production of these particles and confirms that a quark–gluon plasma is created over a wide range of collision energies, very probably once a minimal energy threshold is exceeded. ALICE: Resolution of remaining questions about strangeness as signature of quark–gluon plasma The very high precision of (strange) particle spectra and large transverse momentum coverage reported by the ALICE Collaboration at the Large Hadron Collider (LHC) allows in-depth exploration of lingering challenges, which always accompany new physics, and here in particular the questions surrounding the strangeness signature. Among the most discussed challenges has been the question of whether the abundance of particles produced is enhanced or whether the comparison baseline is suppressed. Suppression is expected when an otherwise absent quantum number, such as strangeness, is rarely produced. This situation was recognized by Hagedorn in his early analysis of particle production and solved by Rafelski and Danos. In that work it was shown that even if just a few new pairs of strange particles are produced, the effect disappears. However, the matter was revived by Hamieh et al. who argued that it is possible that small sub-volumes in QGP are of relevance.
This argument can be resolved by exploring specific, sensitive experimental signatures, for example the ratio of the yields of doubly strange particles of different type. The ALICE experiment obtained this ratio for several collision systems in a wide range of hadronization volumes as described by the total produced particle multiplicity. The results show that this ratio assumes the expected value over a large range of volumes (two orders of magnitude). At small particle volume or multiplicity, the curve shows the expected reduction: the yield of the particle that requires a minimum of two strange quark pairs must fall relative to the particle that needs only one pair as the number of produced strange pairs decreases, since the latter is then easier to make. However, we also see an increase at very high volume; this is an effect at the level of one to two standard deviations. Similar results were already recognized before by Petran et al. Another highly praised ALICE result is the observation of the same strangeness enhancement, not only in AA (nucleus–nucleus) but also in pA (proton–nucleus) and pp (proton–proton) collisions, when the particle production yields are presented as a function of the multiplicity, which, as noted, corresponds to the available hadronization volume. The ALICE results display a smooth volume dependence of the total yield of all studied particles as a function of volume; there is no additional "canonical" suppression. This is so since the yield of strange pairs in QGP is sufficiently high and tracks well the expected abundance increase as the volume and lifespan of QGP increase. This increase is incompatible with the hypothesis that for all reaction volumes QGP is always in chemical (yield) equilibrium of strangeness. Instead, this confirms the theoretical kinetic model proposed by Rafelski and Müller. The production of QGP in pp collisions was not expected by all, but should not be a surprise. The onset of deconfinement is naturally a function of both energy and collision system size. The fact that at extreme LHC energies we cross this boundary also in experiments with the smallest elementary collision systems, such as pp, confirms the unexpected strength of the processes leading to QGP formation. Onset of deconfinement in pp and other "small" system collisions remains an active research topic. Beyond strangeness, the great advantage offered by the LHC energy range is the abundant production of charm and bottom flavor. When QGP is formed, these quarks are embedded in the high density of strangeness present. This should lead to copious production of exotic heavy particles. Other heavy flavor particles, some of which have not even been discovered at this time, are also likely to appear. S–S and S–W collisions at SPS-CERN with projectile energy 200 GeV per nucleon on fixed target Looking back to the beginning of the CERN heavy ion program one sees de facto announcements of quark–gluon plasma discoveries. The CERN-NA35 and CERN-WA85 experimental collaborations announced the formation of a strange antimatter particle in heavy ion reactions in May 1990 at the Quark Matter Conference, Menton, France. The data indicate a significant enhancement of the production of this antimatter particle, which comprises one antistrange quark as well as antiup and antidown quarks. All three constituents of the particle are newly produced in the reaction. The WA85 results were in agreement with theoretical predictions. In the published report, WA85 interpreted their results as QGP. NA35 had large systematic errors in its data, which were improved in the following years.
Moreover, the collaboration needed to evaluate the pp background. These results are presented as a function of the variable called rapidity, which characterizes the speed of the source. The peak of emission indicates that the additionally formed antimatter particles do not originate from the colliding nuclei themselves, but from a source that moves at a speed corresponding to one-half of the rapidity of the incident nucleus, that is, a source at rest in the common center-of-momentum frame of reference formed when both nuclei collide: the hot quark–gluon plasma fireball. Horn in the K⁺/π⁺ ratio and the onset of deconfinement One of the most interesting questions is whether there is a threshold in reaction energy and/or volume size which needs to be exceeded in order to form a domain in which quarks can move freely. It is natural to expect that if such a threshold exists the particle yields/ratios we have shown above should indicate that. One of the most accessible signatures would be the relative kaon yield ratio. A possible structure has been predicted, and indeed an unexpected structure is seen in the ratio of particles comprising the positive kaon K⁺ (comprising an anti-strange quark and an up quark) to positive pion particles, seen in the figure (solid symbols). The rise and fall (square symbols) of the ratio has been reported by the CERN NA49 collaboration. The reason the negative kaon particles do not show this "horn" feature is that the s-quarks prefer to hadronize bound in the Lambda particle, where the counterpart structure is observed. Data points from BNL–RHIC–STAR (red stars) in the figure agree with the CERN data. In view of these results, the objective of the ongoing NA61/SHINE experiment at the CERN SPS and of the proposed low energy run at BNL RHIC, where in particular the STAR detector can search for the onset of production of quark–gluon plasma as a function of energy in the domain where the horn maximum is seen, is to improve the understanding of these results and to record the behavior of other related quark–gluon plasma observables. Outlook Strangeness production and its diagnostic potential as a signature of quark–gluon plasma have been discussed for nearly 30 years. The theoretical work in this field today focuses on the interpretation of the overall particle production data and the derivation of the resulting properties of the bulk of quark–gluon plasma at the time of breakup. The global description of all produced particles can be attempted based on the picture of a hadronizing hot drop of quark–gluon plasma or, alternatively, on the picture of confined and equilibrated hadron matter. In both cases one describes the data within the statistical thermal production model, but considerable differences in detail differentiate the nature of the source of these particles. The experimental groups working in the field also like to develop their own data analysis models and the outside observer sees many different analysis results. There are as many as 10–15 different particle species that follow the pattern predicted for the QGP as a function of reaction energy, reaction centrality, and strangeness content. At yet higher LHC energies, saturation of the strangeness yield and binding to heavy flavor open new experimental opportunities. Conferences and meetings Scientists studying strangeness as a signature of quark–gluon plasma present and discuss their results at specialized meetings. Well established is the series International Conference on Strangeness in Quark Matter, first organized in Tucson, Arizona, in 1995.
The latest edition of the conference was held 10–15 June 2019 in Bari, Italy, attracting about 300 participants. A more general venue is the Quark Matter conference, which most recently took place from 3 to 9 September 2023 in Houston, USA, attracting about 800 participants. Further reading Brief history of the search for critical structures in heavy-ion collisions, Marek Gazdzicki, Mark Gorenstein, Peter Seyboth, 2020. Discovery of quark–gluon plasma: strangeness diaries, Johann Rafelski, 2020. Four heavy-ion experiments at the CERN-SPS: A trip down memory lane, Emanuele Quercigh, 2012. On the history of multi-particle production in high energy collisions, Marek Gazdzicki, 2012. Strangeness and the quark–gluon plasma: thirty years of discovery, Berndt Müller, 2012. See also Quark–gluon plasma Quark matter Hadronization Strangelet Strange particle References Quark matter Production Exotic matter Nuclear physics Phases of matter Quantum chromodynamics
Strangeness and quark–gluon plasma
[ "Physics", "Chemistry" ]
4,523
[ "Quark matter", "Phases of matter", "Astrophysics", "Exotic matter", "Nuclear physics", "Matter" ]
23,984,989
https://en.wikipedia.org/wiki/Energy%20Manufacturing%20Co.%20Inc
Energy Manufacturing Co., Inc. is an American manufacturing company based in Monticello, Iowa. Established in 1944, the company produces a variety of hydraulic cylinders, hydraulic pumps, valves, and power systems. History In the early 1940s B.J. Pasker ran a blacksmith shop in New Vienna, Iowa. In this shop his son, Jerry, produced farm wagons made from discarded automobile spindles and rims. During this period, Pasker also developed a hydraulically powered front loader which mounted to farm tractors. In 1944 Jerry Pasker outgrew the blacksmith shop and sought a larger facility for his operations. Energy in Monticello, Iowa Jerry Pasker moved to South Cedar Street in Monticello, Iowa and was introduced to Harold Sovereign, who sold John Deere tractors and equipment. Pasker and Sovereign formed a partnership known as Industrious Farmer Equipment Company, and moved the business to the vacant second floor of Sovereign's dealership on South Cedar Street. In 1946 the business again outgrew its facility. To accommodate the expansion, Pasker purchased the property of an auto dealership on Main Street. During that time the company manufactured hydraulic components, wagon hoists, truck hoists, valves and hydraulic cylinders. In 1948 the company's name was changed to Energy Farm Equipment Company. In 1962 the business incorporated to become Energy Manufacturing Company, Inc., by which it is still known. Jerry Pasker was killed in a 22 July 1964 airplane crash in Winnipeg, Manitoba, Canada; the company presidency then passed to LaVon Pasker. Energy Manufacturing after Jerry Pasker In 1976 Energy completed construction of a new plant in the Monticello Industrial Park. In 1985 Energy Manufacturing Company was sold to CGF Industries of Topeka, Kansas. CGF also purchased an Omaha, Nebraska, company called "Williams Machine and Tool". In 1997 Energy was purchased by Lincolnshire Partners and in 1999 Energy was purchased by Textron, Inc. Textron ran the company for 5 years until Energy was acquired by an investment group. On November 15, 2005, Energy added office space to the facility for administrative and manufacturing support. Energy 2005–2013 Energy designs and manufactures custom welded hydraulic cylinders. It also designs and manufactures hydraulic valves, pumps, powerpacks and power systems. Energy's cylinders are used in the construction, road machinery, forestry, man lift and hoist, industrial baler, waste compacting, and agricultural industries. Energy manufactures a wide variety of hydraulic cylinders: welded, tie-rod, ram-type, rephasing, telescopic, and position-sensing. Energy has designed and manufactured hydraulic cylinders with bores from less than one inch (2.5 cm) up to 11 inches (28 cm). Cylinders have been manufactured with strokes up to 15 feet (4.6 m). Energy has designed cylinders with working pressures as high as 10,000 psig (690 bar). Energy Manufacturing sold On May 30, 2013, Ligon Industries LLC acquired Energy Manufacturing Co. Inc. Ligon Industries, LLC was founded in 1999, and is located in Birmingham, Alabama. In addition to Energy Manufacturing, Ligon holds 13 other manufacturing companies, seven of which are in the fluid power industry. Ligon is the largest independent manufacturer of hydraulic cylinders in North America.
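As an illustrative aside on the specifications above, the theoretical push force of a hydraulic cylinder is simply the working pressure times the piston area. The sketch below is a hypothetical calculation using the largest bore and pressure figures quoted in this article; it is not a published company tool, and it ignores friction and seal losses.

```python
import math

def cylinder_push_force_lbf(bore_in: float, pressure_psi: float) -> float:
    """Theoretical extend (push) force in pounds-force: pressure x piston area."""
    piston_area_sq_in = math.pi * (bore_in / 2.0) ** 2
    return pressure_psi * piston_area_sq_in

# Example with the upper figures quoted above: 11 in bore at 10,000 psig.
force = cylinder_push_force_lbf(11.0, 10_000.0)
print(f"{force:,.0f} lbf (about {force * 4.448 / 1000:,.0f} kN)")
```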
See also Hydraulic cylinder Telescopic cylinder Tie Rod Cylinder References External links Energy Manufacturing Williams Machine and Tool Energy Manufacturing on NFPA list Hydraulics Pneumatics Article about Energy Ligon Industries Fluid dynamics Hydraulics Pumps
Energy Manufacturing Co. Inc
[ "Physics", "Chemistry", "Engineering" ]
712
[ "Pumps", "Turbomachinery", "Chemical engineering", "Physical systems", "Hydraulics", "Piping", "Fluid dynamics" ]
23,988,436
https://en.wikipedia.org/wiki/Radiative%20equilibrium
Radiative equilibrium is the condition where the total thermal radiation leaving an object is equal to the total thermal radiation entering it. It is one of the several requirements for thermodynamic equilibrium, but it can occur in the absence of thermodynamic equilibrium. There are various types of radiative equilibrium, which is itself a kind of dynamic equilibrium. Definitions Equilibrium, in general, is a state in which opposing forces are balanced, and hence a system does not change in time. Radiative equilibrium is the specific case of thermal equilibrium in which the exchange of heat is carried out by radiative heat transfer. There are several types of radiative equilibrium. Prevost's definitions An important early contribution was made by Pierre Prevost in 1791. Prevost considered that what is nowadays called the photon gas or electromagnetic radiation was a fluid that he called "free heat". Prevost proposed that free radiant heat is a very rare fluid, rays of which, like light rays, pass through each other without detectable disturbance of their passage. Prevost's theory of exchanges stated that each body radiates to, and receives radiation from, other bodies. The radiation from each body is emitted regardless of the presence or absence of other bodies. Prevost in 1791 offered the following definitions (translated): Absolute equilibrium of free heat is the state of this fluid in a portion of space which receives as much of it as it lets escape. Relative equilibrium of free heat is the state of this fluid in two portions of space which receive from each other equal quantities of heat, and which moreover are in absolute equilibrium, or experience precisely equal changes. Prevost went on to comment that "The heat of several portions of space at the same temperature, and next to one another, is at the same time in the two species of equilibrium." Pointwise radiative equilibrium Following Max Planck (1914), a radiative field is often described in terms of specific radiative intensity, which is a function of each geometrical point in a space region, at an instant of time. This is slightly different from Prevost's mode of definition, which was for regions of space. It is also slightly conceptually different from Prevost's definition: Prevost thought in terms of bound and free heat while today we think in terms of heat in kinetic and other dynamic energy of molecules, that is to say heat in matter, and the thermal photon gas. A detailed definition is given by R. M. Goody and Y. L. Yung (1989). They think of the interconversion between thermal radiation and heat in matter. From the specific radiative intensity they derive the monochromatic vector flux density of radiation at each point in a region of space, which is equal to the time averaged monochromatic Poynting vector at that point (D. Mihalas 1978 on pages 9–11). They define the monochromatic volume-specific rate of gain of heat by matter from radiation as the negative of the divergence of the monochromatic flux density vector; it is a scalar function of the position of the point. They define (pointwise) monochromatic radiative equilibrium by the condition that this monochromatic rate of gain of heat vanishes at every point of the region that is in radiative equilibrium. They define (pointwise) radiative equilibrium by the condition that the rate of gain of heat, summed over all radiation frequencies, vanishes at every point of the region that is in radiative equilibrium.
This means that, at every point of the region of space that is in (pointwise) radiative equilibrium, the total, for all frequencies of radiation, interconversion of energy between thermal radiation and energy content in matter is nil (zero). Pointwise radiative equilibrium is closely related to Prevost's absolute radiative equilibrium. D. Mihalas and B. Weibel-Mihalas (1984) emphasise that this definition applies to a static medium, in which the matter is not moving. They also consider moving media. Approximate pointwise radiative equilibrium Karl Schwarzschild in 1906 considered a system in which convection and radiation both operated but radiation was so much more efficient than convection that convection could be, as an approximation, neglected, and radiation could be considered predominant. This applies when the temperature is very high, as for example in a star, but not in a planet's atmosphere. Subrahmanyan Chandrasekhar (1950, page 290) writes of a model of a stellar atmosphere in which "there are no mechanisms, other than radiation, for transporting heat within the atmosphere ... [and] there are no sources of heat in the surrounding". This is hardly different from Schwarzschild's 1906 approximate concept, but is more precisely stated. Radiative exchange equilibrium Planck (1914, page 40) refers to a condition of thermodynamic equilibrium, in which "any two bodies or elements of bodies selected at random exchange by radiation equal amounts of heat with each other." The term radiative exchange equilibrium can also be used to refer to two specified regions of space that exchange equal amounts of radiation by emission and absorption (even when the steady state is not one of thermodynamic equilibrium, but is one in which some sub-processes include net transport of matter or energy including radiation). Radiative exchange equilibrium is very nearly the same as Prevost's relative radiative equilibrium. Approximate radiative exchange equilibrium To a first approximation, an example of radiative exchange equilibrium is in the exchange of non-window wavelength thermal radiation between the land-and-sea surface and the lowest atmosphere, when there is a clear sky. As a first approximation (W. C. Swinbank 1963, G. W. Paltridge and C. M. R. Platt 1976, pages 139–140), in the non-window wavenumbers, there is zero net exchange between the surface and the atmosphere, while, in the window wavenumbers, there is simply direct radiation from the land-sea surface to space. A like situation occurs between adjacent layers in the turbulently mixed boundary layer of the lower troposphere, expressed in the so-called "cooling to space approximation", first noted by C. D. Rodgers and C. D. Walshaw (1966). In astronomy and planetary science Global radiative equilibrium Global radiative equilibrium can be defined for an entire passive celestial system that does not supply its own energy, such as a planet. Liou (2002, page 459) and other authors use the term global radiative equilibrium to refer to radiative exchange equilibrium globally between Earth and extraterrestrial space; such authors intend to mean that, in this theoretical condition, incoming solar radiation absorbed by Earth's surface and its atmosphere would be equal to outgoing longwave radiation from Earth's surface and its atmosphere. Prevost would say then that the Earth's surface and its atmosphere regarded as a whole were in absolute radiative equilibrium.
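The global balance just described, and the planetary equilibrium temperature discussed below, are commonly summarized by the standard textbook relation shown here; the symbols (solar constant S, Bond albedo A, Stefan–Boltzmann constant σ) follow the usual conventions and are not taken from this article.

```latex
% Global radiative equilibrium of a planet (standard textbook form)
\frac{S}{4}\,(1 - A) = \sigma T_{\mathrm{eq}}^{4}
\qquad\Longrightarrow\qquad
T_{\mathrm{eq}} = \left[ \frac{S\,(1 - A)}{4\sigma} \right]^{1/4}
```

For Earth (S ≈ 1361 W m⁻², A ≈ 0.3) this gives roughly 255 K, well below the observed mean surface temperature, which is the sense in which the equilibrium temperature discussed below differs from the surface temperatures.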
Some texts, for example Satoh (2004), simply refer to "radiative equilibrium" in referring to global exchange radiative equilibrium. Planetary equilibrium temperature The various global temperatures that may be theoretically conceived for any planet in general can be computed. Such temperatures include the planetary equilibrium temperature, equivalent blackbody temperature or effective radiation emission temperature of the planet. For a planet with an atmosphere, these temperatures can be different than the mean surface temperature, which may be measured as the global-mean surface air temperature, or as the global-mean surface skin temperature. A radiative equilibrium temperature is calculated for the case that the supply of energy from within the planet (for example, from chemical or nuclear sources) is negligibly small; this assumption is reasonable for Earth, but fails, for example, for calculating the temperature of Jupiter, for which internal energy sources are larger than the incident solar radiation, and hence the actual temperature is higher than the theoretical radiative equilibrium. Stellar equilibrium A star supplies its own energy from nuclear sources, and hence the temperature equilibrium cannot be defined in terms of incident energy only. Cox and Giuli (1968/1984) define 'radiative equilibrium' for a star, taken as a whole and not confining attention only to its atmosphere, when the rate of transfer as heat of energy from nuclear reactions plus viscosity to the microscopic motions of the material particles of the star is just balanced by the transfer of energy by electromagnetic radiation from the star to space. Note that this radiative equilibrium is slightly different from the previous usage. They note that a star that is radiating energy to space cannot be in a steady state of temperature distribution unless there is a supply of energy, in this case, energy from nuclear reactions within the star, to support the radiation to space. Likewise the condition that is used for the above definition of pointwise radiative equilibrium cannot hold throughout a star that is radiating: internally, the star is in a steady state of temperature distribution, not internal thermodynamic equilibrium. Cox and Giuli's definition allows them to say at the same time that a star is in a steady state of temperature distribution and is in 'radiative equilibrium'; they are assuming that all the radiative energy to space comes from within the star. Mechanisms When there is enough matter in a region to allow molecular collisions to occur very much more often than absorption or emission of photons, for radiation one speaks of local thermodynamic equilibrium (LTE). In this case, Kirchhoff's law of equality of radiative absorptivity and emissivity holds. Two bodies in radiative exchange equilibrium, each in its own local thermodynamic equilibrium, have the same temperature and their radiative exchange complies with the Stokes-Helmholtz reciprocity principle. References Thermodynamics
Radiative equilibrium
[ "Physics", "Chemistry", "Mathematics" ]
2,031
[ "Thermodynamics", "Dynamical systems" ]
26,830,333
https://en.wikipedia.org/wiki/Water-energy%20nexus
The water-energy nexus is the relationship between the water used for energy production, including both electricity and sources of fuel such as oil and natural gas, and the energy consumed to extract, purify, deliver, heat/cool, treat and dispose of water (and wastewater), sometimes referred to as the energy intensity (EI). Energy is needed in every stage of the water cycle from producing, moving, treating and heating water to collecting and treating wastewater. The relationship is not truly a closed loop as the water used for energy production need not be the same water that is processed using that energy, but all forms of energy production require some input of water making the relationship inextricable. Among the first studies to evaluate the water and energy relationship was a life-cycle analysis conducted by Peter Gleick in 1994 that highlighted the interdependence and initiated the joint study of water and energy. In 2014 the US Department of Energy (DOE) released their report on the water-energy nexus citing the need for joint water-energy policies and better understanding of the nexus and its susceptibility to climate change as a matter of national security. The hybrid Sankey diagram in the DOE's 2014 water-energy nexus report summarizes water and energy flows in the US by sector, demonstrating interdependence as well as singling out thermoelectric power as the single largest user of water, used mainly for cooling. Water used in the energy sector All types of power generation consume water either to process the raw materials used in the facility, to construct and maintain the plant, or to generate the electricity itself. Renewable power sources such as photovoltaic solar and wind power, which require little water to produce energy, still require water to process the raw materials used to build them. Water can either be used or consumed, and can be categorized as fresh, ground, surface, blue, grey or green among others. Water is considered used if it does not reduce the supply of water to downstream users, i.e. water that is taken and returned to the same source (instream use), such as in thermoelectric plants that use water for cooling and are by far the largest users of water. While used water is returned to the system for downstream uses, it has usually been degraded in some way, mainly due to thermal or chemical pollution, and the natural flow has been altered, which does not factor into an assessment if only the quantity of water is considered. Water is consumed when it is removed completely from the system, such as by evaporation or consumption by crops or humans. When assessing water use, all these factors must be considered, as well as spatiotemporal considerations, making precise determination of water use very difficult. According to the International Energy Agency (IEA), water stress also poses risks to the transport of fuels and materials. In 2022, droughts and severe heatwaves led to low water levels in key European rivers such as the Rhine, limiting barge transport of coal, chemicals and other materials. Spang et al. (2014) conducted a study looking at the water consumption for electricity production (WCEP) internationally that showed both the variation in energy types produced across countries and the vast differences in efficiency of power production per unit of water use (Figure 1).
The operation of water distribution systems and power distribution systems under emergency conditions of limited power and water availability is an important consideration for improving the overall resilience of the water–energy nexus. Khatavkar and Mays (2017a) present a methodology for control of water distribution and power distribution systems under emergency conditions of drought and limited power availability to ascertain at least a minimal supply of cooling water to the power plants. Khatavkar and Mays (2017) applied an optimization model for a water–energy nexus system to a hypothetical regional-level system, which showed improved resilience for several contingency scenarios. Increasingly controversial has been the use of water resources for hydraulic fracturing of shale gas and tight oil reserves. Many environmentalists are deeply concerned about the potential for such operations to exacerbate local water scarcity (since the water volumes required are large) and to produce considerable volumes of polluted water (both directly through pollution of fracking water, and indirectly through contamination of groundwater). With rising energy prices in North America and Europe in the 2020s it is likely that government and industry interest in hydraulic fracturing will grow. Energy intensity The operation of urban water systems requires substantial energy support. Key processes such as water transfer, consumption, and wastewater treatment consume significant amounts of energy, sparking discussions about the energy intensity and carbon emissions of water systems. US (California) In 2001, operating water systems in the US consumed approximately 3% of the total annual electricity (~75 TWh). California's State Water Project (SWP) and Central Valley Project (CVP) are together the largest water system in the world with the highest water lift, over 2000 ft. across the Tehachapi Mountains, delivering water from the wetter and relatively rural north of the state, to the agriculturally intensive central valley, and finally to the arid and heavily populated south. Consequently, the SWP and CVP are the largest single consumers of electricity in California, consuming approximately 5 TWh of electricity each per year. In 2001, 19% of the state's total electricity use (~48 TWh/year) was used in processing water, including end uses, with the urban sector accounting for 65% of this. In addition to electricity, 30% of California's natural gas consumption was due to water-related processes, mainly residential water heating, and 88 million gallons of diesel were consumed by groundwater pumps for agriculture. The residential sector alone accounted for 48% of the total combined electricity and natural gas consumed for water-related processes in the state. According to the California Public Utilities Commission (CPUC) Energy Division's Embedded Energy in Water Studies report: "'Energy Intensity' refers to the average amount of energy needed to transport or treat water or wastewater on a per unit basis." Energy intensity is sometimes used synonymously with embedded or embodied energy. In 2005, water deliveries to Southern California were assessed to have an average EI of 12.7 MWh/MG, nearly two-thirds of which was due to transportation.
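As a rough plausibility check on the energy-intensity figure just quoted, the minimum energy needed simply to lift water can be estimated from E = mgh. The sketch below is an illustrative back-of-the-envelope calculation assuming a 610 m lift (about the 2000 ft Tehachapi lift mentioned above) and ignoring friction, pump inefficiency, and treatment energy; it is not taken from the cited studies.

```python
# Idealized lift energy per million US gallons (MG) of water.
KG_PER_US_GALLON = 3.785   # approximate mass of one US gallon of water, kg
GALLONS_PER_MG = 1_000_000
LIFT_HEIGHT_M = 610.0      # assumed lift, roughly 2000 ft
G = 9.81                   # gravitational acceleration, m/s^2
JOULES_PER_MWH = 3.6e9

mass_kg = GALLONS_PER_MG * KG_PER_US_GALLON
energy_mwh = mass_kg * G * LIFT_HEIGHT_M / JOULES_PER_MWH
print(f"Minimum lift energy: {energy_mwh:.1f} MWh per MG")  # about 6.3 MWh/MG
```

This idealized figure of roughly 6 MWh/MG is consistent in order of magnitude with the reported 12.7 MWh/MG once pump losses and the non-transport components of the energy intensity are added.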
Following the findings that a fifth of California's electricity is consumed in water-related processes including end-use, the CPUC responded by authorising a statewide study into the relationship between energy and water that was conducted by the California Institute for Energy and Environment (CIEE), and developed programs to save energy through water conservation. Arab region According to the World Energy Outlook 2016, in the Middle East, the water sector's share of total electricity consumption is expected to increase from 9% in 2015 to 16% by 2040, because of a rise in desalination capacity. The Arab region includes the following countries: Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Palestinian Territories, Algeria, Bahrain, Egypt, Iraq, Jordan, Qatar, Sudan, Saudi Arabia, Syria, Tunisia, the United Arab Emirates, and Yemen. A general characteristic of the Arab region is that it is one of the most water-stressed regions of the world; rainfall is mostly rare or falls in an unpredictable pattern. The cumulative area of the Arab region is approximately 10.2% of the world's area, but the region only receives 2.1% of the world's average annual precipitation. Further, the region accommodates 0.3% of the world's annual renewable water resources (ACSAD 1997). Consequently, the region has experienced a declining fresh water supply per capita, with a shortage of roughly 42 cubic kilometers relative to water demand. This shortage is expected to grow three times by 2030, and four times by 2050. This is alarming given that the world's economic stability depends heavily on the Arab region. There are numerous methods to mitigate the growing gap in fresh water supply per capita. One applicable method is desalination, which is ubiquitous particularly in the GCC region. Approximately 50% of the world's desalination capacity is contained in the Arab region, and almost all of that capacity is held in the GCC countries. Bahrain provides 79% of its fresh water through desalination, Qatar around 75%, Kuwait around 70%, Saudi Arabia 15%, and the UAE about 67%. These Persian Gulf countries built enormous desalination plants to fulfill the water supply shortages as these countries have developed economically. Agriculture in the GCC region accounts for approximately 2% of its GDP; however, it utilizes 80% of the water produced. It should also be noted that it requires immense amounts of energy, mostly from oil, to operate these desalination plants. Countries such as Saudi Arabia, Bahrain, and Kuwait will face difficulty meeting the demand for desalination if the current trend continues. The GCC spends 10–25% of its generated electric power to desalinate water. Hydroelectricity Hydroelectricity is a special case of water used for energy production mainly because hydroelectric power generation is regarded as being cleaner and renewable energy, and dams (the main source of hydroelectric production) serve multiple purposes besides energy generation, including flood prevention, storage, control and recreation, which make justifiable allocation analyses difficult. Furthermore, the impacts of hydroelectric power generation can be hard to quantify both in terms of evaporative consumptive losses and altered quality of water, since damming results in flows that are much colder than for flowing streams. In some cases the moderation of flows can be seen as a rivalry of water use in time, which may also need to be accounted for in impact analysis.
Willingness to pay can be used as an estimate to determine the value of the cost. Retrofitting existing dams to produce electricity has been one approach to hydroelectricity. While using dams to produce electricity is seen as a cleaner form of energy, it does not come without its own challenges to the environment. Hydroelectric power has typically been seen as a lower-carbon strategy for generating power; however, recent studies have linked dams to greenhouse gas emissions. Galy-Lacaux et al. conducted a study to measure the emissions produced by the Petit Saut Dam on the Sinnamary River in French Guiana over a two-year period. The researchers found that about 10% of the carbon stored in soil and vegetation was released in gaseous form within 2 years. Water availability Because of the shift toward developing new renewable energy technologies, there is added stress on water availability. Low-carbon energy methods, such as biofuels, concentrating solar power (CSP), carbon capture, utilization and storage, or nuclear power, are quite water intensive. Water scarcity has a huge impact on energy production and reliability. See also Climate and energy Water, energy and food security nexus References External links California's Water – Energy Relationship WaterEnergyNEXUS – Advanced Technologies and Best Practices Embedded Energy in Water Studies Study 1: Statewide and Regional Water-Energy Relationship Embedded Energy in Water Studies Study 2: Water Agency and Function Component Study and Embedded Energy- Water Load Profiles The Water-Energy Nexus: Challenges and Opportunities Thirsty Energy Water supply Water and the environment Energy
Water-energy nexus
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
2,303
[ "Hydrology", "Physical quantities", "Energy (physics)", "Energy", "Environmental engineering", "Water supply" ]
26,834,062
https://en.wikipedia.org/wiki/NKX-homeodomain%20factor
NKX-homeodomain factors are a family of homeodomain transcription factors that are critical regulators of organ development. Human genes that encode NKX-homeodomain factors include: NKX1-1, NKX1-2 NKX2-1, NKX2-2, NKX2-4, NKX2-8 NKX3-1, NKX3-2 NKX6-1, NKX6-2, NKX6-3 References Developmental genes and proteins Transcription factors
NKX-homeodomain factor
[ "Chemistry", "Biology" ]
112
[ "Transcription factors", "Gene expression", "Signal transduction", "Developmental genes and proteins", "Induced stem cells" ]
26,834,979
https://en.wikipedia.org/wiki/Condensation%20cloud
A transient condensation cloud, also called a Wilson cloud, is observable surrounding large explosions in humid air. When a nuclear weapon or high explosive is detonated in sufficiently humid air, the "negative phase" of the shock wave causes a rarefaction of the air surrounding the explosion but not of the air contained within it. The rarefied air is temporarily cooled, which causes condensation of some of the water vapor within the rarefied air. When the pressure and temperature return to normal, the Wilson cloud dissipates. Mechanism Since heat does not leave the affected air mass, the change of pressure following a detonation is adiabatic, with an associated change of temperature. In humid air, the drop in temperature in the most rarefied portion of the shock wave can bring the air temperature below its dew point, at which moisture condenses to form a visible cloud of microscopic water droplets. Since the pressure effect of the wave is reduced by its expansion (the same pressure effect is spread over a larger radius), the vapor effect also has a limited radius. Such vapor can also be seen in low pressure regions during high–g subsonic maneuvers of aircraft in humid conditions. Occurrence Nuclear weapons testing Scientists observing the Operation Crossroads nuclear tests in 1946 at Bikini Atoll named that transitory cloud a "Wilson cloud" because the same pressure effect is employed in a Wilson cloud chamber to let condensation mark the tracks of electrically-charged sub-atomic particles. Analysts of later nuclear bomb tests used the more general term condensation cloud. The shape of the shock wave (influenced by different speed in different altitudes), and the temperature and humidity of different atmospheric layers determine the appearance of the Wilson clouds. During nuclear tests, condensation rings around or above the fireball are commonly observed. Rings around the fireball may become stable and form rings around the rising stem of the mushroom cloud. The lifetime of the Wilson cloud during nuclear air bursts can be shortened by the thermal radiation from the fireball, which heats the cloud above to the dew point and evaporates the droplets. Non-nuclear explosions Any sufficiently large explosion, such as one caused by a large quantity of conventional explosives or a volcanic eruption, can create a condensation cloud, as seen in Operation Sailor Hat or in the 2020 Beirut explosion, where a very large Wilson cloud expanded outwards from the blast. Aircraft and rockets The same kind of condensation cloud is sometimes seen above the wings of aircraft in a moist atmosphere. The top of a wing has a reduction of air pressure as part of the process of generating lift. This reduction in air pressure causes a cooling and the condensation of water vapor. Hence, small, transient clouds appear. The vapor cone of a transonic aircraft or rocket on ascent is another example of a condensation cloud. See also Rope trick effect Contrail References Physical phenomena Explosions Cloud types Shock waves
Condensation cloud
[ "Physics", "Chemistry" ]
589
[ "Waves", "Physical phenomena", "Shock waves", "Explosions" ]
26,835,075
https://en.wikipedia.org/wiki/IRX1
Iroquois-class homeodomain protein IRX-1, also known as Iroquois homeobox protein 1, is a protein that in humans is encoded by the IRX1 gene. All members of the Iroquois (IRO) family of proteins share two highly conserved features, encoding both a homeodomain and a characteristic IRO sequence motif. Members of this family are known to play numerous roles in early embryo patterning. IRX1 has also been shown to act as a tumor suppressor gene in several forms of cancer. Role in development IRX1 is a member of the Iroquois homeobox gene family. Members of this family play multiple roles during pattern formation in embryos of numerous vertebrate and invertebrate species. IRO genes are thought to function early in development to define large territories, and again later in development for further patterning specification. Experimental data suggest roles for IRX1 in vertebrates may include development and patterning of lungs, limbs, heart, eyes, and nervous system. Gene Overview IRX1 is located on the forward DNA strand (see Sense (molecular biology)) of chromosome 5, from position 3596054 - 3601403 at the 5p15.3 location. The human gene product is a 1858 base pair mRNA with 4 predicted exons in humans. Promoter analysis was performed using El Dorado through the Genomatix software page. The predicted promoter region spans 1040 base pairs from position 3595468 through 3595468 on the forward strand of chromosome 5. Gene neighborhood IRX1 is relatively isolated, with no other protein coding genes found from position 3177835 – 5070004. Expression Microarray and RNA seq data suggest that IRX1 is ubiquitously expressed at low levels in adult tissues, with the highest relative levels of expression occurring in the heart, adipose, kidney, and breast tissues. Moderate to high levels are also indicated in the lung, prostate and stomach. Promoter analysis with the El Dorado program from Genomatix predicted that IRX1 expression is regulated by factors that include E2F cell cycle regulators, NRF1, and ZF5, and brachyury. Expression data from human, mouse, and developing mouse brains are available though the Allen Brain Atlas. Protein Properties and characteristics The mature IRX1 protein has 480 amino acid residues, with a molecular mass of 49,600 daltons and an isoelectric point of 5.7. A BLAST search revealed that IRX1 contains two highly conserved domains: a homeodomain and a characteristic IRO motif of unknown function. The homeodomain belongs to the TALE (three amino acid loop extension) class of homeodomains, and is characterized by the addition of three extra amino acids between the first and second helix of three alpha helices that comprise the domain. The presence of this well characterized homeodomain strongly suggests that IRX1 acts as a transcription factor. This is further supported by the predicted localization of IRX1 to the nucleus. The IRO motif is a region downstream of the homeodomain that is found only in members of the Iroquois-class homeodomain proteins, though its function is poorly understood. However, its similarity to an internal region of the Notch receptor protein suggests that it may be involved with protein-protein interaction. In addition to these two characteristic domains, IRX1 contains a third domain from the HARE-HTH superfamily fused to the C-terminal end of the homeodomain. This domain adopts a winged helix-turn-helix fold predicted to bind DNA, and is thought to play a role in recruiting effector activities to DNA. 
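Protein parameters of the kind quoted above (length, molecular mass, isoelectric point) are routinely recomputed from the sequence with standard tools. A minimal sketch using Biopython is shown below; the sequence string is a hypothetical placeholder and must be replaced with the actual 480-residue IRX1 sequence, which is not reproduced in this article.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical placeholder; substitute the real IRX1 protein sequence here.
irx1_sequence = "MSYPQFGYLGAPQHHAA"

analysis = ProteinAnalysis(irx1_sequence)
print("Length:", len(irx1_sequence))
print("Molecular weight (Da):", round(analysis.molecular_weight(), 1))
print("Isoelectric point:", round(analysis.isoelectric_point(), 2))
```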
Several forms of post-translational modification are predicted, including SUMOylation, C-mannosylation, and phosphorylation, using bioinformatics tools from ExPASy. Bioinformatic analysis of IRX1 with the NetPhos tool predicted 71 potential phosphorylation sites throughout the protein. Protein Interactions Potential protein interacting partners for IRX1 were found using computational tools. The STRING database lists nine putative interacting partners supported by text mining evidence, though closer analysis of the results shows little support for most of these predicted interactions. However, it is possible that one of these proteins, CDKN1A, is involved in the predicted regulation of IRX1 by E2F cell cycle regulators. Conservation Orthologs IRX1 has a high degree of conservation across vertebrate and invertebrate species. The entire protein is more fully conserved through vertebrate species, while only the homeodomain and IRO motif are conserved in more distant homologs. Homologous sequences were found in species as distantly related to humans as the pig roundworm Ascaris suum, from the family Ascarididae, using BLAST and the ALIGN tool through the San Diego Super Computer Biology Workbench. The following is a table describing the evolutionary conservation of IRX1. Paralogs IRX1 is one of six members of the Iroquois-class homeodomain proteins found in humans: IRX2, IRX3, IRX4, IRX5, and IRX6. IRX1, IRX2, and IRX4 are found on human chromosome 5, and their orientation corresponds to that of IRX3, IRX5, and IRX6 found on human chromosome 16. It is thought that the genomic organization of IRO genes in conserved gene clusters allows for coregulation and enhancer sharing during development. References Further reading Transcription factors
IRX1
[ "Chemistry", "Biology" ]
1,155
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
26,836,721
https://en.wikipedia.org/wiki/Antoine%20Nicolas%20Duchesne
Antoine Nicolas Duchesne (born 7 October 1747 Versailles; died 18 February 1827 Paris) was a French botanist known for his keen observation of variation within species, and for demonstrating that species are not immutable, because mutations can occur. "As Duchesne's observations were unaided by knowledge of modern concepts of genetics and molecular biology, his insight was truly remarkable." His particular interests were in strawberries and gourds. Duchesne worked in the gardens of Versailles, where he was a student of Bernard de Jussieu and corresponded with Carl Linnaeus. He established a notable collection of strawberries in the botanical garden of the Petit Trianon and was the first to document the separation of sexes in wild strawberry and the hybrid origin of the garden strawberry. The genus Duchesnea Sm. (Rosaceae) was named after him. Works (selected) Manuel de botanique, contenant les propriétés des plantes utiles, 1764 Essai sur l’histoire naturelle des courges, 46 pp. Panckoucke, Paris 1786. Histoire naturelle des fraisiers contenant les vues d'économie réunies à la botanique et suivie de remarques particulières sur plusieurs points qui ont rapport à l'histoire naturelle générale, Didot jeune, Paris 1766. on line at Bayerische Staatsbibliothek and GoogleBooks Le Jardinier prévoyant, contenant par forme de tableau, le rapport des opérations journalières avec le temps des récoltes successives qu'elles préparent. 11 vols. P. F. Didot jeune, Paris 1770-1781 Sur la formation des jardins, Dorez, Paris 1775. Le Porte-feuille des enfans, mélange intéressant d'animaux, fruits, fleurs, habillemens, plans, cartes et autres objets.... Mérigot jeune, Paris, [n.d., probably 1784]. Le Livret du ″Porte-feuille des enfans″, à l'usage des écoles... d'après la loi du 11 germinal an IV. Imprimerie de Gueffier, Paris, an VI – 1797. Le Cicerone de Versailles, ou l'Indicateur des curiosités et des établissemens de cette ville.... J.-P. Jacob, Versailles, an XII — 1804; revised and augmented in 1815. Sources Adrien Davy de Virville (ed.) (1955) Histoire de la botanique en France. Paris: SEDES 394 p. Further reading Günter Staudt (2003), Les dessins d'Antoine Nicolas Duchesne pour son histoire naturelles des fraisiers / A.N. Duchesne's drawings for his Histoire naturelle des fraisiers. Publications scientifiques du Muséum national d'histoire naturelle (Paris) : 370 p. (coll. Des Planches et des Mots 1) . Foreword of Michel Chauvet on Pl@ntUse. Harry S. Paris (2007), The drawings of Antoine Nicolas Duchesne for his Natural History of the Gourds / Les dessins d'Antoine Nicolas Duchesne pour son histoire naturelle des courges. Publications scientifiques du Muséum national d'histoire naturelle (Paris) : 454 p. (coll. Des Planches et des Mots 4, ed. Christian Erard.) . Foreword of Michel Chauvet on Pl@ntUse. References 1747 births 1827 deaths 19th-century French botanists Proto-evolutionary biologists 18th-century French botanists
Antoine Nicolas Duchesne
[ "Biology" ]
801
[ "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
26,839,222
https://en.wikipedia.org/wiki/Antimicrobial%20properties%20of%20copper
Copper and its alloys (brasses, bronzes, cupronickel, copper-nickel-zinc, and others) are natural antimicrobial materials. Ancient civilizations exploited the antimicrobial properties of copper long before the concept of microbes became understood in the nineteenth century. In addition to several copper medicinal preparations, it was also observed centuries ago that water contained in copper vessels or transported in copper conveyance systems was of better quality (i.e., no or little visible slime or biofouling formation) than water contained or transported in other materials. The antimicrobial properties of copper are still under active investigation. Molecular mechanisms responsible for the antibacterial action of copper have been a subject of intensive research. Scientists are also actively demonstrating the intrinsic efficacy of copper alloy "touch surfaces" to destroy a wide range of microorganisms that threaten public health. Mechanisms of action In 1852 Victor Burq discovered those working with copper had far fewer deaths to cholera than anyone else, and did extensive research confirming this. In 1867 he presented his findings to the French Academies of Science and Medicine, informing them that putting copper on the skin was effective at preventing someone from getting cholera. The oligodynamic effect was discovered in 1893 as a toxic effect of metal ions on living cells, algae, molds, spores, fungi, viruses, prokaryotic, and eukaryotic microorganisms, even in relatively low concentrations. This antimicrobial effect is shown by ions of copper as well as mercury, silver, iron, lead, zinc, bismuth, gold, and aluminium. In 1973, researchers at Battelle Columbus Laboratories conducted a comprehensive literature, technology, and patent search that traced the history of understanding the "bacteriostatic and sanitizing properties of copper and copper alloy surfaces", which demonstrated that copper, in very small quantities, has the power to control a wide range of molds, fungi, algae, and harmful microbes. Of the 312 citations mentioned in the review across the time period 1892–1973, the observations below are noteworthy: Copper inhibits Actinomucor elegans, Aspergillus niger, Bacterium linens, Bacillus megaterium, Bacillus subtilis, Brevibacterium erythrogenes, Candida utilis, Penicillium chrysogenum, Rhizopus niveus, Saccharomyces mandshuricus, and Saccharomyces cerevisiae in concentrations above 10 g/L. Candida utilis (formerly, Torulopsis utilis) is completely inhibited at 0.04 g/L copper concentrations. Tubercle bacillus is inhibited by copper as simple cations or complex anions in concentrations from 0.02 to 0.2 g/L. Achromobacter fischeri and Photobacterium phosphoreum growth is inhibited by metallic copper. Paramecium caudatum cell division is reduced by copper plates placed on Petri dish covers containing infusoria and nutrient media. Poliovirus is inactivated within ten minutes of exposure to copper with ascorbic acid. A subsequent paper probed some of copper's antimicrobial mechanisms and cited no fewer than 120 investigations into the efficacy of copper's action on microbes. The authors noted that the antimicrobial mechanisms are very complex and take place in many ways, both inside cells and in the interstitial spaces between cells. Examples of some of the molecular mechanisms noted by various researchers include the following: The 3-dimensional structure of proteins can be altered by copper, so that the proteins can no longer perform their normal functions. 
The result is inactivation of bacteria or viruses. Copper complexes form radicals that inactivate viruses. Copper may disrupt enzyme structures, and functions by binding to sulfur- or carboxylate-containing groups and amino groups of proteins. Copper may interfere with other essential elements, such as zinc and iron. Copper facilitates deleterious activity in superoxide radicals. Repeated redox reactions on site-specific macromolecules generate HO• radicals, thereby causing "multiple hit damage" at target sites. Copper can interact with lipids, causing their peroxidation and opening holes in the cell membranes, thereby compromising the integrity of cells. This can cause leakage of essential solutes, which in turn, can have a desiccating effect. Copper damages the respiratory chain in Escherichia coli cells. and is associated with impaired cellular metabolism. Faster corrosion correlates with faster inactivation of microorganisms. This may be due to increased availability of cupric ion, Cu2+, which is believed to be responsible for the antimicrobial action. In inactivation experiments on the flu strain, H1N1, which is nearly identical to the H5N1 avian strain and the 2009 H1N1 (swine flu) strain, researchers hypothesized that copper's antimicrobial action probably attacks the overall structure of the virus and therefore has a broad-spectrum effect. Microbes require copper-containing enzymes to drive certain vital chemical reactions. Excess copper, however, can affect proteins and enzymes in microbes, thereby inhibiting their activities. Researchers believe that excess copper has the potential to disrupt cell function both inside cells and in the interstitial spaces between cells, probably acting on the outer envelope of cells. Currently, researchers believe that the most important antimicrobial mechanisms for copper are as follows: Elevated copper levels inside a cell causes oxidative stress and the generation of hydrogen peroxide. Under these conditions, copper participates in the so-called Fenton-type reaction — a chemical reaction causing oxidative damage to cells. Excess copper causes a decline in the membrane integrity of microbes, leading to leakage of specific essential cell nutrients, such as potassium and glutamate. This leads to desiccation and subsequent cell death. While copper is needed for many protein functions, in an excess situation (as on a copper alloy surface), copper binds to proteins that do not require copper for their function. This "inappropriate" binding leads to loss-of-function of the protein, and/or breakdown of the protein into nonfunctional portions. These potential mechanisms and others are the subject of continuing study by academic research laboratories worldwide. Antimicrobial efficacy of copper alloy touch surfaces Copper alloy surfaces have intrinsic properties that destroy many microorganisms. In the interest of protecting public health, especially in healthcare environments with their susceptible patient populations, an abundance of peer-reviewed antimicrobial efficacy studies have been conducted in the past ten years regarding copper's efficacy to destroy E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Clostridioides difficile, influenza A virus, adenovirus, and fungi. Stainless steel was also investigated because it is an important surface material in today's healthcare environments. 
The studies cited here, plus others directed by the United States Environmental Protection Agency, resulted in the 2008 registration of 274 different copper alloys as certified antimicrobial materials that have public health benefits. E. coli E. coli O157:H7 is a potent, highly infectious, ACDP (Advisory Committee on Dangerous Pathogens, UK) Hazard Group 3 foodborne and waterborne pathogen. The bacterium produces potent toxins that cause diarrhea, severe aches, and nausea in infected persons. Symptoms of severe infections include hemolytic colitis (bloody diarrhea), hemolytic uremic syndrome (kidney disease), and death. E. coli O157:H7 has become a serious public health threat because of its increased incidence and because children up to 14 years of age, the elderly, and immunocompromised individuals are at risk of incurring the most severe symptoms. Efficacy on copper surfaces Recent studies have shown that copper alloy surfaces kill E. coli O157:H7. More than 99.9% of E. coli microbes are killed after just 1–2 hours on copper. On stainless steel surfaces, the microbes can survive for weeks. Results of E. coli O157:H7 destruction on an alloy containing 99.9% copper (C11000) demonstrate that this pathogen is rapidly and almost completely killed (more than 99.9% kill rate) within ninety minutes at room temperature (20 °C). At chill temperatures (4 °C), more than 99.9% of E. coli O157:H7 are killed within 270 minutes. E. coli O157:H7 destruction on several copper alloys containing 99%–100% copper (including C10200, C11000, C18080, and C19700) at room temperature begins within minutes. At chilled temperatures, the inactivation process takes about an hour longer. No significant reduction in the amount of viable E. coli O157:H7 occurs on stainless steel after 270 minutes. Studies have examined the E. coli O157:H7 bactericidal efficacies on 25 different copper alloys to identify those alloys that provide the best combination of antimicrobial activity, corrosion/oxidation resistance, and fabrication properties. Copper's antibacterial effect was found to be intrinsic in all of the copper alloys tested. As in previous studies, no antibacterial properties were observed on stainless steel (UNS S30400). Also, in confirmation with earlier studies, the rate of drop-off of E. coli O157:H7 on the copper alloys is faster at room temperature than at chill temperature. For the most part, the bacterial kill rate of copper alloys increased with increasing copper content of the alloy. This is further evidence of copper's intrinsic antibacterial properties. Efficacy on brass, bronze, copper-nickel alloys Brasses, which were frequently used for doorknobs and push plates in decades past, also demonstrate bactericidal efficacies, but within a somewhat longer time frame than pure copper. All nine brasses tested were almost completely bactericidal (more than 99.9% kill rate) at 20 °C within 60–270 minutes. Many brasses were almost completely bactericidal at 4 °C within 180–360 minutes. The rate of total microbial death on four bronzes varied from within 50–270 minutes at 20 °C, and from 180 to 270 minutes at 4 °C. The kill rate of E. coli O157 on copper-nickel alloys increased with increasing copper content. Zero bacterial counts at room temperature were achieved after 105–360 minutes for five of the six alloys. Despite not achieving a complete kill, alloy C71500 achieved a 4-log drop within the six-hour test, representing a 99.99% reduction in the number of live organisms. 
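The "4-log drop" and "99.99% reduction" figures quoted above are two ways of stating the same result, related by the standard log-reduction convention. The short Python sketch below uses made-up colony counts, purely to illustrate the arithmetic, not data from the studies cited here.

```python
import math

def log_reduction(initial_count: float, final_count: float) -> float:
    """Log10 reduction between an initial and a surviving microbial count."""
    return math.log10(initial_count / final_count)

def percent_kill(initial_count: float, final_count: float) -> float:
    """Percentage of organisms killed, from the same two counts."""
    return 100.0 * (1.0 - final_count / initial_count)

if __name__ == "__main__":
    # Hypothetical counts in colony-forming units (CFU); a 4-log drop.
    n0, n = 1_000_000, 100
    print(f"log reduction: {log_reduction(n0, n):.1f}")   # 4.0
    print(f"percent kill:  {percent_kill(n0, n):.2f}%")   # 99.99%
```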
Efficacy on stainless steel Unlike copper alloys, stainless steel (S30400) does not exhibit any degree of bactericidal properties against E. coli O157:H7. This material, which is one of the most common touch surface materials in the healthcare industry, allows toxic E. coli O157:H7 to remain viable for weeks. Near-zero bacterial counts are not observed even after 28 days of investigation. Epifluorescence photographs have demonstrated that E. coli O157:H7 is almost completely killed on copper alloy C10200 after just 90 minutes at 20 °C; whereas a substantial number of pathogens remain on stainless steel S30400. MRSA Methicillin-resistant Staphylococcus aureus (MRSA) is a dangerous bacteria strain because it is resistant to beta-lactam antibiotics. Recent strains of the bacteria, EMRSA-15 and EMRSA-16, are highly transmissible and durable. This is of extreme importance to those concerned with reducing the incidence of hospital-acquired MRSA infections. In 2008, after evaluating a wide body of research mandated specifically by the United States Environmental Protection Agency (EPA), registration approvals were granted by EPA in 2008 granting that copper alloys kill more than 99.9% of MRSA within two hours. Subsequent research conducted at the University of Southampton (UK) compared the antimicrobial efficacies of copper and several non-copper proprietary coating products to kill MRSA. At 20 °C, the drop-off in MRSA organisms on copper alloy C11000 is dramatic and almost complete (more than 99.9% kill rate) within 75 minutes. However, neither a triclosan-based product nor two silver-based antimicrobial treatments (Ag-A and Ag-B) exhibited any meaningful efficacy against MRSA. Stainless steel S30400 did not exhibit any antimicrobial efficacy. In 2004, the University of Southampton research team was the first to clearly demonstrate that copper inhibits MRSA. On copper alloys — C19700 (99% copper), C24000 (80% copper), and C77000 (55% copper) — significant reductions in viability were achieved at room temperatures after 1.5 hours, 3.0 hours, and 4.5 hours, respectively. Faster antimicrobial efficacies were associated with higher copper alloy content. Stainless steel did not exhibit any bactericidal benefits. Clostridioides difficile Clostridioides difficile, an anaerobic bacterium, is a major cause of potentially life-threatening disease, including nosocomial diarrheal infections, especially in developed countries. C. difficile endospores can survive up to five months on surfaces. The pathogen is frequently transmitted by the hands of healthcare workers in hospital environments. C. difficile is currently a leading hospital-acquired infection in the UK, and rivals MRSA as the most common organism to cause hospital-acquired infections in the U.S. It is responsible for a series of intestinal health complications, often referred to collectively as Clostridioides difficile Associated Disease (CDAD). The antimicrobial efficacy of various copper alloys against Clostridioides difficile was recently evaluated. The viability of C. difficile spores and vegetative cells were studied on copper alloys C11000 (99.9% copper), C51000 (95% copper), C70600 (90% copper), C26000 (70% copper), and C75200 (65% copper). Stainless steel (S30400) was used as the experimental control. The copper alloys significantly reduced the viability of both C. difficile spores and vegetative cells. On C75200, near total kill was observed after one hour (however, at six hours total C. 
difficile increased, and decreased more slowly afterward). On C11000 and C51000, near total kill was observed after three hours, then total kill in 24 hours on C11000 and 48 hours on C51000. On C70600, near total kill was observed after five hours. On C26000, near total kill was achieved after 48 hours. On stainless steel, no reductions in viable organisms were observed after 72 hours (three days) of exposure and no significant reduction was observed within 168 hours (one week). Influenza A Influenza, commonly known as flu, is an infectious disease caused by a viral pathogen different from the one that produces the common cold. Symptoms of influenza, which are much more severe than those of the common cold, include fever, sore throat, muscle pains, severe headache, coughing, weakness, and general discomfort. Influenza can cause pneumonia, which can be fatal, particularly in young children and the elderly. After incubation for one hour on copper, active influenza A virus particles were reduced by 75%. After six hours, the particles were reduced on copper by 99.999%. Influenza A virus was found to survive in large numbers on stainless steel. Once surfaces are contaminated with virus particles, fingers can transfer particles to up to seven other clean surfaces. Because of copper's ability to destroy influenza A virus particles, copper can help to prevent cross-contamination of this viral pathogen. Adenovirus Adenovirus is a group of viruses that infect the tissue lining membranes of the respiratory and urinary tracts, eyes, and intestines. Adenoviruses account for about 10% of acute respiratory infections in children. These viruses are a frequent cause of diarrhea. In a recent study, 75% of adenovirus particles were inactivated on copper (C11000) within one hour. Within six hours, 99.999% of the adenovirus particles were inactivated. By contrast, 50% of the infectious adenovirus particles still survived on stainless steel after six hours. Fungi The antifungal efficacy of copper was compared to aluminium on the following organisms that can cause human infections: Aspergillus spp., Fusarium spp., Penicillium chrysogenum, Aspergillus niger and Candida albicans. An increased die-off of fungal spores was found on copper surfaces compared with aluminium. Aspergillus niger growth occurred on the aluminium coupons, whereas growth was inhibited on and around the copper coupons. See also Antimicrobial copper-alloy touch surfaces Copper alloys in aquaculture Copper-silver ionization Medical uses of silver Oligodynamic effect References Antimicrobials Copper in health
Antimicrobial properties of copper
[ "Chemistry", "Biology" ]
3,631
[ "Biology and pharmacology of chemical elements", "Copper in health", "Antimicrobials", "Biocides" ]
20,975,192
https://en.wikipedia.org/wiki/Branch%20migration
Branch migration is the process by which base pairs on homologous DNA strands are consecutively exchanged at a Holliday junction, moving the branch point up or down the DNA sequence. Branch migration is the second step of genetic recombination, following the exchange of two single strands of DNA between two homologous chromosomes. The process is random, and the branch point can be displaced in either direction on the strand, influencing the degree of which the genetic material is exchanged. Branch migration can also be seen in DNA repair and replication, when filling in gaps in the sequence. It can also be seen when a foreign piece of DNA invades the strand. Mechanism The mechanism for branch migration differs between prokaryotes and eukaryotes. Prokaryotes The mechanism for prokaryotic branch migration has been studied many times in Escherichia coli. In E. coli, the proteins RuvA and RuvB come together and form a complex that facilitates the process in a number of ways. RuvA is a tetramer and binds to the DNA at the Holliday junction when it is in the open X form. The protein binds in a way that the DNA entering/departing the junction is still free to rotate and slide through. RuvA has a domain with acidic amino acid residues that interfere with the base pairs in the centre of the junction. This forces the base pairs apart so that they can re-anneal with base pairs on the homologous strands. In order for migration to occur, RuvA must be associated with RuvB and ATP. RuvB has the ability to hydrolyze ATP, driving the movement of the branch point. RuvB is a hexamer with helicase activity, and also binds the DNA. As ATP is hydrolyzed, RuvB rotates the recombined strands while pulling them out of the junction, but does not separate the strands as helicase would. The final step in branch migration is called resolution and requires the protein RuvC. The protein is a dimer, and will bind to the Holliday junction when it takes on the stacked X form. The protein has endonuclease activity, and cleaves the strands at exactly the same time. The cleavage is symmetrical, and gives two recombined DNA molecules with single stranded breaks. Eukaryotes The eukaryotic mechanism is much more complex involving different and additional proteins, but follows the same general path. Rad54, a highly conserved eukaryotic protein, is reported to oligomerize on Holliday junctions to promote branch migration. Archaea A helicase (designated Saci-0814) isolated from the thermophilic crenarchaeon Sulfolobus acidocaldarius dissociated DNA Holliday junction structures, and showed branch migration activity in vitro. In a S. acidocaldarius strain deleted for Saci-0814, the homologous recombination frequency was reduced five-fold compared to the parental strain indicating that Saci-0814 is involved in homologous recombination in vivo. Based on this evidence it appears that Saci-0814 is employed in homologous recombination in S. acidocaldarius and functions as a branch migration helicase. Homologous recombination appears to be an important adaptation in hyperthermophiles, such as S. acidocaldarius, for efficiently repairing DNA damage. Helicase Saci-0814 is classified as an aLhr1 (archaeal long helicase related 1) under superfamily 2 helicases, and its homologs are conserved among the archaea. Control The rate of branch migration is dependent on the amount of divalent ions, specifically magnesium ions (Mg2+), present during recombination. 
The ions determine which structure the Holliday junction will adopt, as they play a stabilizing role. When the ions are absent, the backbones repel each other and the junction takes on the open X structure. In this condition, migration is optimal and the junction will be free to move up and down the strands. When the ions are present, they neutralize the negatively charged backbone. This allows the strands to move closer together and the junction adopts the stacked X structure. It is during this state that resolution will be optimal, allowing RuvC to bind to the junction. References Cellular processes Molecular genetics
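Because branch migration is described above as a random process in which the junction can step in either direction, it is sometimes pictured as a one-dimensional random walk along the duplex. The toy simulation below is only an illustration of that picture; the step probability and number of steps are arbitrary assumptions, not measured values.

```python
import random

def simulate_branch_point(steps: int = 10_000, p_forward: float = 0.5, seed: int = 1) -> int:
    """Random-walk toy model of a Holliday junction branch point.

    Each step exchanges one base pair, moving the branch point +1 or -1
    along the duplex. Returns the net displacement after `steps` steps.
    """
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < p_forward else -1
    return position

if __name__ == "__main__":
    # With an unbiased walk the expected net displacement is zero, but the
    # typical excursion grows roughly as the square root of the step count.
    for trial in range(3):
        print(simulate_branch_point(seed=trial))
```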
Branch migration
[ "Chemistry", "Biology" ]
918
[ "Molecular genetics", "Cellular processes", "Molecular biology" ]
20,975,307
https://en.wikipedia.org/wiki/Duality%20%28mechanical%20engineering%29
In mechanical engineering, many terms are associated into pairs called duals. A dual of a relationship is formed by interchanging force (stress) and deformation (strain) in an expression. Here is a partial list of mechanical dualities: force — deformation stress — strain stiffness method — flexibility method Examples Constitutive relation: stress and strain (Hooke's law). See also Duality (electrical circuits) Hydraulic analogy List of dualities Mechanical–electrical analogies Series and parallel springs References Fung, Y. C., A First Course in CONTINUUM MECHANICS, 2nd edition, Prentice-Hall, Inc. 1977 Mechanical engineering
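One concrete way to see the stiffness-method/flexibility-method pairing (and the "Series and parallel springs" entry above) is that stiffnesses add for springs in parallel, while flexibilities (compliances, 1/k) add for springs in series. A small Python sketch of this dual pair, using arbitrary example stiffness values:

```python
def parallel_stiffness(stiffnesses):
    """Springs in parallel share the deformation: stiffnesses add."""
    return sum(stiffnesses)

def series_stiffness(stiffnesses):
    """Springs in series share the force: flexibilities (1/k) add."""
    return 1.0 / sum(1.0 / k for k in stiffnesses)

if __name__ == "__main__":
    k = [100.0, 300.0]            # example spring stiffnesses in N/m (arbitrary values)
    print(parallel_stiffness(k))  # 400.0 N/m
    print(series_stiffness(k))    # 75.0 N/m (flexibilities 1/100 + 1/300 = 1/75)
```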
Duality (mechanical engineering)
[ "Physics", "Mathematics", "Engineering" ]
132
[ "Mathematical structures", "Applied and interdisciplinary physics", "Category theory", "Duality theories", "Mechanical engineering", "Geometry" ]
20,978,769
https://en.wikipedia.org/wiki/Bulging%20factor
Bulging factor is an engineering term describing the geometry of out-of-plane deformations of the surface of a crack on a pressurized fuselage structure. It is used in evaluating the damage tolerance of airframe fuselages. The singly curved geometry and the pressure differential cause a longitudinal crack to bulge out or protrude from the original shape. This change in geometry, or "bulging effect", significantly increases the stress intensity factor at the crack tips. The effects of this loading condition can trigger different types of failure mechanisms. For the case of unstiffened shell structures, the bulging factor β can be defined as the ratio of the stress intensity factor (SIF) of a curved shell to the stress intensity factor of a flat panel: β = K_shell / K_flat. The representation of this phenomenon becomes rather complex due to the biaxial and internal pressure load and structural configuration. References Lazghab Tarek, Fayza Ayari, Lotfi Chelbi. Crack growth in cylindrical aluminum shells with inner reinforcing foam layer. Springer, 2006. pp. 151. Pressure vessels Fracture mechanics
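Since the bulging factor is defined as a ratio of two stress intensity factors, applying it amounts to scaling the flat-panel solution. The sketch below uses the textbook flat-panel SIF for a through crack, K = σ√(πa), together with a Folias-type approximation for β that is commonly quoted in the fracture-mechanics literature but is not taken from this article or its reference; treat the formula constants and the numerical inputs alike as illustrative assumptions.

```python
import math

def flat_panel_sif(stress_mpa: float, half_crack_m: float) -> float:
    """K = sigma * sqrt(pi * a) for a through crack of half-length a
    in a wide flat panel; result in MPa*sqrt(m)."""
    return stress_mpa * math.sqrt(math.pi * half_crack_m)

def folias_bulging_factor(half_crack_m: float, radius_m: float, thickness_m: float) -> float:
    """One commonly cited approximation for the bulging factor of an axial
    through crack in an unstiffened pressurized cylinder (an assumption here,
    not the expression used by the cited reference)."""
    lam2 = half_crack_m ** 2 / (radius_m * thickness_m)
    return math.sqrt(1.0 + 1.255 * lam2 - 0.0135 * lam2 ** 2)

if __name__ == "__main__":
    sigma = 80.0       # hoop stress in MPa (illustrative)
    a = 0.025          # half crack length in m (illustrative)
    R, t = 2.0, 0.002  # shell radius and skin thickness in m (illustrative)
    k_flat = flat_panel_sif(sigma, a)
    beta = folias_bulging_factor(a, R, t)
    print(f"flat-panel K = {k_flat:.1f} MPa*sqrt(m)")
    print(f"bulging factor beta = {beta:.2f}")
    print(f"bulged K = beta * K_flat = {beta * k_flat:.1f} MPa*sqrt(m)")
```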
Bulging factor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
215
[ "Structural engineering", "Fracture mechanics", "Chemical equipment", "Materials science", "Physical systems", "Hydraulics", "Materials degradation", "Pressure vessels" ]
4,883,003
https://en.wikipedia.org/wiki/Systems%20modeling%20language
The systems modeling language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities. Contrast with UML SysML offers several systems engineering specific improvements over UML, which has been developed as a software modeling language. These improvements include the following: SysML's diagrams express system engineering concepts better due to the removal of UML's software-centric restrictions and adds two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. Consequent to these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities. SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs. SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis. SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE Recommended Practice for Architectural Description of Software Intensive Systems). SysML reuses seven of UML 2's fourteen "nominative" types of diagrams, and adds two diagrams (requirement and parametric diagrams) for a total of nine diagram types. SysML also supports allocation tables, a tabular format that can be dynamically derived from SysML allocation relationships. A table which compares SysML and UML 2 diagrams is available in the SysML FAQ. Consider modeling an automotive system: with SysML one can use Requirement diagrams to efficiently capture functional, performance, and interface requirements, whereas with UML one is subject to the limitations of use case diagrams to define high-level functional requirements. Likewise, with SysML one can use Parametric diagrams to precisely define performance and quantitative constraints like maximum acceleration, minimum curb weight, and total air conditioning capacity. UML provides no straightforward mechanism to capture this sort of essential performance and quantitative information. Concerning the rest of the automotive system, enhanced activity diagrams and state machine diagrams can be used to specify the embedded software control logic and information flows for the on-board automotive computers. Other SysML structural and behavioral diagrams can be used to model factories that build the automobiles, as well as the interfaces between the organizations that work in the factories. 
History The SysML initiative originated in a January 2001 decision by the International Council on Systems Engineering (INCOSE) Model Driven Systems Design workgroup to customize the UML for systems engineering applications. Following this decision, INCOSE and the Object Management Group (OMG), which maintains the UML specification, jointly chartered the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. The SE DSIG, with support from INCOSE and the ISO AP 233 workgroup, developed the requirements for the modeling language, which were subsequently issued by the OMG parting in the UML for Systems Engineering Request for Proposal (UML for SE RFP; OMG document ad/03-03-41) in March 2003. In 2003 David Oliver and Sanford Friedenthal of INCOSE requested that Cris Kobryn, who successfully led the UML 1 and UML 2 language design teams, lead their joint effort to respond to the UML for SE RFP. As Chair of the SysML Partners, Kobryn coined the language name "SysML" (short for "Systems Modeling Language"), designed the original SysML logo, and organized the SysML Language Design team as an open source specification project. Friedenthal served as Deputy Chair, and helped organize the original SysML Partners team. In January 2005, the SysML Partners published the SysML v0.9 draft specification. Later, in August 2005, Friedenthal and several other original SysML Partners left to establish a competing SysML Submission Team (SST). The SysML Partners released the SysML v1.0 Alpha specification in November 2005. OMG SysML After a series of competing SysML specification proposals, a SysML Merge Team was proposed to the OMG in April 2006. This proposal was voted upon and adopted by the OMG in July 2006 as OMG SysML, to differentiate it from the original open source specification from which it was derived. Because OMG SysML is derived from open source SysML, it also includes an open source license for distribution and use. The OMG SysML v. 1.0 specification was issued by the OMG as an Available Specification in September 2007. The current version of OMG SysML is v1.6, which was issued by the OMG in December 2019. In addition, SysML was published by the International Organization for Standardization (ISO) in 2017 as a full International Standard (IS), ISO/IEC 19514:2017 (Information technology -- Object management group systems modeling language). The OMG has been working on the next generation of SysML and issued a Request for Proposals (RFP) for version 2 on December 8, 2017, following its open standardization process. The resulting specification, which will incorporate language enhancements from experience applying the language, will include a UML profile, a metamodel, and a mapping between the profile and metamodel. A second RFP for a SysML v2 Application Programming Interface (API) and Services RFP was issued in June 2018. Its aim is to enhance the interoperability of model-based systems engineering tools. Diagrams SysML includes 9 types of diagram, some of which are taken from UML. Activity diagram Block definition diagram Internal block diagram Package diagram Parametric diagram Requirement diagram Sequence diagram State machine diagram Use case diagram Tools There are several modeling tool vendors offering SysML support. Lists of tool vendors who support SysML or OMG SysML can be found on the SysML Forum or SysML websites, respectively. Model exchange As an OMG UML 2.0 profile, SysML models are designed to be exchanged using the XML Metadata Interchange (XMI) standard. 
In addition, architectural alignment work is underway to support the ISO 10303 (also known as STEP, the Standard for the Exchange of Product model data) AP-233 standard for exchanging and sharing information between systems engineering software applications and tools. See also SoaML Energy systems language Object process methodology Universal Systems Language List of SysML tools References Further reading External links Introduction to Systems Modeling Language (SysML), Part 1 and Part 2. YouTube. SysML Open Source Specification Project Provides information related to SysML open source specifications, FAQ, mailing lists, and open source licenses. OMG SysML Website Furnishes information related to the OMG SysML specification, SysML tutorial, papers, and tool vendor information. Article "EE Times article on SysML (May 8, 2006)" SE^2 MBSE Challenge team: "Telescope Modeling" Paper "System Modelling Language explained" (PDF format) Bruce Douglass: Real-Time Agile Systems and Software Development List of Popular SysML Modeling Tools Unified Modeling Language Systems engineering Modeling languages
Systems modeling language
[ "Engineering" ]
1,665
[ "Systems engineering", "Systems Modeling Language" ]
4,883,158
https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20theorem
In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found. The theorem also generalizes to the -gamma function. Statement of the theorem For every there is no non-zero polynomial such that where is the gamma function. For example, define by Then the equation is called an algebraic differential equation, which, in this case, has the solutions and — the Bessel functions of the first and second kind respectively. Hence, we say that and are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. Hölder’s Theorem simply states that the gamma function, , is not differentially algebraic and is therefore transcendentally transcendental. Proof Let and assume that a non-zero polynomial exists such that As a non-zero polynomial in can never give rise to the zero function on any non-empty open domain of (by the fundamental theorem of algebra), we may suppose, without loss of generality, that contains a monomial term having a non-zero power of one of the indeterminates . Assume also that has the lowest possible overall degree with respect to the lexicographic ordering For example, because the highest power of in any monomial term of the first polynomial is smaller than that of the second polynomial. Next, observe that for all we have: If we define a second polynomial by the transformation then we obtain the following algebraic differential equation for : Furthermore, if is the highest-degree monomial term in , then the highest-degree monomial term in is Consequently, the polynomial has a smaller overall degree than , and as it clearly gives rise to an algebraic differential equation for , it must be the zero polynomial by the minimality assumption on . Hence, defining by we get Now, let in to obtain A change of variables then yields and an application of mathematical induction (along with a change of variables at each induction step) to the earlier expression reveals that This is possible only if is divisible by , which contradicts the minimality assumption on . Therefore, no such exists, and so is not differentially algebraic. Q.E.D. References Gamma and related functions Theorems in analysis
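The displayed formulas in the statement and example above appear to have been lost in extraction. For reference, a restatement of the theorem and of the Bessel-equation example in their standard form (a reconstruction, not text recovered from this article) is:

```latex
% Hölder's theorem (standard formulation; reconstruction of the lost displays)
\[
  \text{For every } n \in \mathbb{N} \text{ there is no nonzero polynomial }
  P \in \mathbb{C}[x;\, y_0, y_1, \ldots, y_n] \text{ such that}
\]
\[
  P\bigl(x;\ \Gamma(x), \Gamma'(x), \ldots, \Gamma^{(n)}(x)\bigr) \equiv 0,
  \qquad \text{where } \Gamma \text{ is the gamma function.}
\]
% Example of an algebraic differential equation (Bessel's equation):
\[
  P(x; y_0, y_1, y_2) = x^2 y_2 + x y_1 + (x^2 - \nu^2)\, y_0
  \quad\Longrightarrow\quad
  x^2 f''(x) + x f'(x) + (x^2 - \nu^2) f(x) = 0,
\]
\[
  \text{with solutions } f = J_\nu \text{ and } f = Y_\nu .
\]
```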
Hölder's theorem
[ "Mathematics" ]
508
[ "Mathematical analysis", "Theorems in mathematical analysis", "Mathematical theorems", "Mathematical problems" ]
4,883,555
https://en.wikipedia.org/wiki/Inverse%20beta%20decay
In nuclear and particle physics, inverse beta decay, commonly abbreviated to IBD, is a nuclear reaction involving an electron antineutrino scattering off a proton, creating a positron and a neutron. This process is commonly used in the detection of electron antineutrinos in neutrino detectors, such as the first detection of antineutrinos in the Cowan–Reines neutrino experiment, or in neutrino experiments such as KamLAND and Borexino. It is an essential process in experiments involving low-energy neutrinos (< 60 MeV) such as those studying neutrino oscillation, reactor neutrinos, sterile neutrinos, and geoneutrinos. Reactions Antineutrino induced Inverse beta decay proceeds as ν̄e + p → e+ + n, where an electron antineutrino (ν̄e) interacts with a proton (p) to produce a positron (e+) and a neutron (n). The IBD reaction can only be initiated when the antineutrino possesses at least 1.806 MeV of kinetic energy (called the threshold energy). This threshold energy is due to a difference in mass between the products (e+ and n) and the reactants (ν̄e and p) and also slightly due to a relativistic mass effect on the antineutrino. Most of the antineutrino energy is distributed to the positron due to its small mass relative to the neutron. The positron promptly undergoes matter–antimatter annihilation after creation and yields a flash of light with energy calculated as E_vis = E_ν̄ − 1.806 MeV + 2 × 511 keV, where 511 keV is the electron and positron rest energy, E_vis is the visible energy from the reaction, and E_ν̄ is the antineutrino kinetic energy. After the prompt positron annihilation, the neutron undergoes neutron capture on an element in the detector, producing a delayed flash of 2.22 MeV if captured on a proton. The timing of the delayed capture is 200–300 microseconds after IBD initiation ( in the Borexino detector). The timing and spatial coincidence between the prompt positron annihilation and delayed neutron capture provides a clear IBD signature in neutrino detectors, allowing for discrimination from background. The IBD cross section is dependent on antineutrino energy and capturing element, although it is generally on the order of 10⁻⁴⁴ cm² (~ attobarns). Neutrino induced Another kind of inverse beta decay is the reaction νe + 37Cl → 37Ar + e−. The Homestake experiment used the reaction to detect solar neutrinos. Electron induced During the formation of neutron stars, or in radioactive isotopes capable of electron capture, neutrons are created by electron capture: p + e− → n + νe. This is similar to the inverse beta reaction in that a proton is changed to a neutron, but is induced by the capture of an electron instead of an antineutrino. See also Kamioka Liquid Scintillator Antineutrino Detector References Radioactivity
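As a small worked example of the prompt-energy relation above, the sketch below computes the visible energy for an arbitrary, reactor-like antineutrino energy of 4 MeV; it assumes the standard relation E_vis ≈ E_ν̄ − 1.806 MeV + 2 × 0.511 MeV and neglects neutron recoil.

```python
IBD_THRESHOLD_MEV = 1.806   # minimum antineutrino energy for inverse beta decay
ELECTRON_MASS_MEV = 0.511   # electron/positron rest energy

def prompt_visible_energy(antineutrino_energy_mev: float) -> float:
    """Approximate visible (prompt) energy: positron kinetic energy plus the
    two 511 keV annihilation gammas; neutron recoil is neglected."""
    if antineutrino_energy_mev < IBD_THRESHOLD_MEV:
        raise ValueError("below the inverse beta decay threshold")
    return antineutrino_energy_mev - IBD_THRESHOLD_MEV + 2.0 * ELECTRON_MASS_MEV

if __name__ == "__main__":
    e_nu = 4.0  # MeV, an illustrative reactor-antineutrino energy
    print(f"visible prompt energy: {prompt_visible_energy(e_nu):.2f} MeV")  # about 3.22 MeV
```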
Inverse beta decay
[ "Physics", "Chemistry" ]
610
[ "Radioactivity", "Nuclear physics" ]
4,884,363
https://en.wikipedia.org/wiki/Levorphanol
Levorphanol (brand name Levo-Dromoran) is an opioid medication used to treat moderate to severe pain. It is the levorotatory enantiomer of the compound racemorphan. Its dextrorotatory counterpart is dextrorphan. It was first described in Germany in 1946. The drug has been in medical use in the United States since 1953. Pharmacology Levorphanol acts predominantly as an agonist of the μ-opioid receptor (MOR), but is also an agonist of the δ-opioid receptor (DOR), κ-opioid receptor (KOR), and the nociceptin receptor (NOP), as well as an NMDA receptor antagonist and a serotonin-norepinephrine reuptake inhibitor (SNRI). Levorphanol, similarly to certain other opioids, also acts as a glycine receptor antagonist and GABA receptor antagonist at very high concentrations. As per the World Health Organization, levorphanol is a step 3 opioid and is considered eight times more potent than morphine at the MOR (2 mg levorphanol is equivalent to 15 mg morphine). Relative to morphine, levorphanol lacks complete cross-tolerance and possesses greater intrinsic activity at the MOR. The duration of action is generally long compared to other comparable analgesics and varies from 4 hours to as much as 15 hours. For this reason levorphanol is useful in palliation of chronic pain and similar conditions. Levorphanol has an oral to parenteral effectiveness ratio of 2:1, one of the most favorable of the strong narcotics. Its antagonism of the NMDA receptor, similar to those of the phenylheptylamine open-chain opioids such as methadone or the phenylpiperidine ketobemidone, make levorphanol useful for types of pain that other analgesics may not be as effective against, such as neuropathic pain. Levorphanol's exceptionally high analgesic efficacy in the treatment of neuropathic pain is also conferred by its action on serotonin and norepinephrine transporters, similar to the opioids tramadol and tapentadol, and mutually complements the analgesic effect of its NMDA receptor antagonism. Levorphanol shows a high rate of psychotomimetic side effects such as hallucinations and delirium, which have been attributed to its binding to and activation of the KOR. At the same time however, activation of this receptor as well as of the DOR have been determined to contribute to its analgesic effects. Chemistry Chemically, levorphanol belongs to the morphinan class and is (−)-3-hydroxy-N-methyl-morphinan. It is the "left-handed" (levorotatory) stereoisomer of racemorphan, the racemic mixture of the two stereoisomers with differing pharmacology. The "right-handed" (dextrorotatory) enantiomer of racemorphan is dextrorphan (DXO), an antitussive, potent dissociative hallucinogen (NMDA receptor antagonist), and weakly active opioid. DXO is an active metabolite of the pharmaceutical drug dextromethorphan (DXM), which, analogously to DXO, is an enantiomer of the racemic mixture racemethorphan along with levomethorphan, the latter of which has similar properties to those of levorphanol. Society and culture Name Levorphanol is the INN, BAN, and DCF. As the medically used tartrate salt, the drug is also known as levorphanol tartrate (USAN, BANM). The former developmental code name of levorphanol at Roche was Ro 1-5431. Availability As the tartrate salt, levorphanol is marketed by Hikma Pharmaceuticals USA Inc. and Virtus Pharmaceuticals in the U.S., and Canada under the brand name Levo-Dromoran. Legality Levorphanol is listed under the Single Convention On Narcotic Drugs 1961 and is regulated like morphine in most countries. 
In the U.S., it is a Schedule II Narcotic controlled substance with a DEA ACSCN of 9220 and 2013 annual aggregate manufacturing quota of 4.5 kilograms. The salts in use are the tartrate (free base conversion ratio 0.58) and hydrobromide (0.76). See also Cough syrup Racemorphan; Dextrorphan; Noscapine Codeine; Pholcodine Dextromethorphan; Dimemorfan Butamirate Pentoxyverine Tipepidine Cloperastine; Levocloperastine References Delta-opioid receptor agonists Dissociative drugs Enantiopure drugs Euphoriants GABA receptor antagonists German inventions Glycine receptor antagonists Kappa-opioid receptor agonists Morphinans Mu-opioid receptor agonists NMDA receptor antagonists Nociceptin receptor agonists Hydroxyarenes Serotonin–norepinephrine reuptake inhibitors Synthetic opioids
Levorphanol
[ "Chemistry" ]
1,160
[ "Stereochemistry", "Enantiopure drugs" ]
4,886,627
https://en.wikipedia.org/wiki/Wetted%20perimeter
The wetted perimeter is the perimeter of the cross sectional area that is "wet": the length of the line along which the channel's wetted surface intersects a cross sectional plane normal to the flow direction. The term wetted perimeter is common in civil engineering, environmental engineering, hydrology, geomorphology, and heat transfer applications; it is associated with the hydraulic diameter or hydraulic radius. Engineers commonly cite the cross sectional area of a river. The wetted perimeter can be defined mathematically as P = Σi li, where li is the length of each surface in contact with the aqueous body. In open channel flow, the wetted perimeter is defined as the surface of the channel bottom and sides in direct contact with the aqueous body. Friction losses typically increase with an increasing wetted perimeter, resulting in a decrease in head. In a practical experiment, one is able to measure the wetted perimeter with a tape measure weighted down to the river bed to get a more accurate measurement. When a channel is much wider than it is deep, the wetted perimeter approximates the channel width. See also Hydrological transport model Manning formula Hydraulic radius References Earth sciences Environmental engineering Environmental science Fluid dynamics Geomorphology Hydraulic engineering Hydrology Length
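For a simple geometry the sum above is easy to write out. For a rectangular open channel of bottom width b carrying flow at depth y, the wetted surfaces are the bottom and the two side walls, so P = b + 2y and the hydraulic radius is R = A/P. A short Python sketch with arbitrary example dimensions:

```python
def rectangular_channel(b: float, y: float) -> dict:
    """Wetted perimeter, flow area and hydraulic radius of a rectangular
    open channel of bottom width b (m) at flow depth y (m)."""
    perimeter = b + 2.0 * y   # bottom plus two wetted side walls
    area = b * y              # cross-sectional flow area
    return {"P": perimeter, "A": area, "R": area / perimeter}

if __name__ == "__main__":
    result = rectangular_channel(b=4.0, y=1.0)  # example dimensions in metres
    print(f"P = {result['P']:.2f} m, A = {result['A']:.2f} m^2, R = {result['R']:.3f} m")
    # As the channel becomes much wider than it is deep, P approaches b,
    # matching the approximation noted above.
```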
Wetted perimeter
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
249
[ "Scalar physical quantities", "Hydrology", "Physical quantities", "Distance", "Chemical engineering", "Quantity", "Size", "Physical systems", "Hydraulics", "Civil engineering", "Length", "Environmental engineering", "Piping", "nan", "Wikipedia categories named after physical quantities",...
1,914,405
https://en.wikipedia.org/wiki/Heats%20of%20fusion%20of%20the%20elements%20%28data%20page%29
Heat of fusion Notes Values refer to the enthalpy change between the liquid phase and the most stable solid phase at the melting point (normal, 101.325 kPa). References CRC As quoted from various sources in an online version of: David R. Lide (ed), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Enthalpy of Fusion LNG As quoted from various sources in: J.A. Dean (ed), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 6, Thermodynamic Properties; Table 6.4, Heats of Fusion, Vaporization, and Sublimation and Specific Heat at Various Temperatures of the Elements and Inorganic Compounds WEL As quoted at http://www.webelements.com/ from these sources: G.W.C. Kaye and T.H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993. D.R. Lide, (ed.) in Chemical Rubber Company handbook of chemistry and physics, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998. A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992. H. Ellis (Ed.) in Nuffield Advanced Science Book of Data, Longman, London, UK, 1972. See also Thermodynamic properties Chemical element data pages
Heats of fusion of the elements (data page)
[ "Physics", "Chemistry", "Mathematics" ]
330
[ "Thermodynamic properties", "Physical quantities", "Chemical data pages", "Quantity", "Chemical element data pages", "Thermodynamics" ]
1,914,406
https://en.wikipedia.org/wiki/Heats%20of%20vaporization%20of%20the%20elements%20%28data%20page%29
Heat of vaporization Notes Values refer to the enthalpy change in the conversion of liquid to gas at the boiling point (normal, 101.325 kPa). References Zhang et al. CRC As quoted from various sources in an online version of: David R. Lide (ed.), CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Boca Raton, Florida, 2003; Section 6, Fluid Properties; Enthalpy of Vaporization GME Kugler HK & Keller C (eds) 1985, Gmelin handbook of inorganic and organometallic chemistry, 8th ed., 'At, Astatine', system no. 8a, Springer-Verlag, Berlin, , pp. 116–117 LNG As quoted from various sources in: J.A. Dean (ed.), Lange's Handbook of Chemistry (15th Edition), McGraw-Hill, 1999; Section 6, Thermodynamic Properties; Table 6.4, Heats of Fusion, Vaporization, and Sublimation and Specific Heat at Various Temperatures of the Elements and Inorganic Compounds WEL As quoted at http://www.webelements.com/ from these sources: G.W.C. Kaye and T. H. Laby in Tables of physical and chemical constants, Longman, London, UK, 15th edition, 1993. D.R. Lide, (ed.) in Chemical Rubber Company handbook of chemistry and physics, CRC Press, Boca Raton, Florida, USA, 79th edition, 1998. A.M. James and M.P. Lord in Macmillan's Chemical and Physical Data, Macmillan, London, UK, 1992. H. Ellis (ed.) in Nuffield Advanced Science Book of Data, Longman, London, UK, 1972. See also Thermodynamic properties Chemical element data pages
Heats of vaporization of the elements (data page)
[ "Physics", "Chemistry", "Mathematics" ]
390
[ "Thermodynamic properties", "Physical quantities", "Chemical data pages", "Quantity", "Chemical element data pages", "Thermodynamics" ]
1,916,509
https://en.wikipedia.org/wiki/Mean%20free%20time
Molecules in a fluid constantly collide with each other. The mean free time τ for a molecule in a fluid is the average time between collisions. The mean free path of the molecule is the product of the average speed and the mean free time. These concepts are used in the kinetic theory of gases to compute transport coefficients such as the viscosity. In a gas the mean free path may be much larger than the average distance between molecules. In a liquid these two lengths may be very similar. Scattering is a random process. It is often modeled as a Poisson process, in which the probability of a collision in a small time interval dt is dt/τ. For a Poisson process like this, the average time since the last collision, the average time until the next collision and the average time between collisions are all equal to τ. References Statistical mechanics
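A quick numerical check of the Poisson picture above: if collisions occur at rate 1/τ, the inter-collision times are exponentially distributed with mean τ, and the mean free path is the average speed times τ. The sketch below uses arbitrary illustrative numbers, not values for any particular gas.

```python
import random

def simulate_mean_free_time(tau: float, n_collisions: int = 100_000, seed: int = 0) -> float:
    """Estimate the mean time between collisions for a Poisson collision
    process with mean free time tau, by sampling exponential waiting times."""
    rng = random.Random(seed)
    total = sum(rng.expovariate(1.0 / tau) for _ in range(n_collisions))
    return total / n_collisions

if __name__ == "__main__":
    tau = 2.0e-10   # mean free time in seconds (illustrative, not a measured value)
    v_avg = 500.0   # average molecular speed in m/s (illustrative)
    tau_est = simulate_mean_free_time(tau)
    print(f"estimated mean free time: {tau_est:.3e} s (target {tau:.3e} s)")
    print(f"mean free path = v * tau = {v_avg * tau:.3e} m")
```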
Mean free time
[ "Physics" ]
167
[ "Statistical mechanics stubs", "Statistical mechanics" ]
1,916,689
https://en.wikipedia.org/wiki/Chemical%20accident
A chemical accident is the unintentional release of one or more hazardous chemicals, which could harm human health and the environment. Such events include fires, explosions, and release of toxic materials that may cause people illness, injury, or disability. Chemical accidents can be caused for example by natural disasters, human error, or deliberate acts for personal gain. Chemical accidents are generally understood to be industrial-scale ones, often with important offsite consequences. Unintended exposure to chemicals that occur at smaller work sites, as well as in private premises during everyday activities are usually not referred to as chemical accidents. Process safety is the engineering discipline dealing with chemical accident hazards understanding and management. Process safety's scope extends however to fires and explosions from hazardous materials generally not referred to as 'chemicals', such as refined and unrefined hydrocarbon mixtures. Frequency Chemical accidents are relatively common in the United States, with a significant accident occurring on average multiple times per week. Most chemical accidents never make national headline news. American chemical industry public relations professionals claim that such accidents are becoming less frequent but the U.S. Environmental Protection Agency states that they are increasing in frequency, with higher average annual rates of population evacuations and of people needing medical treatment resulting from chemical accidents. Texas is the leading U.S. state in chemical accidents. Examples The most dangerous chemical accident recorded in history was the 1984 Bhopal gas tragedy in India, in which more than 3,000 people died after highly toxic methyl isocyanate was released at a Union Carbide pesticides factory. The release happened after the storage tank safety valve had failed to contain the excess pressure created by the exothermic reaction between water and methyl isocyanate. The accident was caused by a faulty valve that let the water into the tank. The safety refrigeration unit for the tank also was not functional since it did not have any coolant. The 2020 Beirut explosion was one of the biggest non-nuclear explosions in history. It happened when approximately 2,750 tons of ammonium nitrate inside a warehouse at the port exploded. Regulation and government agencies European Union In the European Union, incidents such as the Flixborough disaster and the Seveso disaster led to legislation such as the Seveso Directive, which mandates safety reports to be prepared by process and storage plants and issued to local and regional authorities. United States In the United States, concern about chemical accidents after the Bhopal disaster led to the passage of the 1986 Emergency Planning and Community Right-to-Know Act. The EPCRA requires local emergency planning efforts throughout the country, including emergency notifications. The law also requires companies to make publicly available information about their storage of toxic chemicals. Based on such information, citizens can identify the vulnerable zones in which severe toxic releases could cause harm or even in some cases death. In 1990 the Chemical Safety and Hazard Investigation Board (CSB) was established by the Congress, though it did not become operational until 1998. The Board's mission is to determine the root causes of chemical accidents and issue safety recommendations to prevent future chemical accidents. 
Note that the CSB does not issue fines or citations since the Congress designed the agency to be non-regulatory. It also organizes workshops on a number of issues related to preparing for, preventing, and responding to chemical accidents. See also Chemical safety Process safety Process safety management References External links 24-7 Response Brandweerinformatiecentrum voor gevaarlijke stoffen/Fire services information centre for dangerous goods (in Dutch) OECD Programme on Chemical Accidents: Environment Directorate Guiding Principles for Chemical Accident Prevention, Preparedness and Response Preventing Chemical Accidents – How to Prevent Chemicals From Contaminating Your Workplace – Safety Storage Systems U.S. Chemical Safety Board Process safety Chemical hazards Industrial fires and explosions
Chemical accident
[ "Chemistry", "Engineering", "Environmental_science" ]
776
[ "Industrial fires and explosions", "Chemical accident", "Safety engineering", "Environmental chemistry", "Chemical hazards", "Process safety", "Explosions", "Chemical process engineering" ]
1,917,294
https://en.wikipedia.org/wiki/Safety%20wire
A safety wire or locking-wire is a type of positive locking device that prevents fasteners from falling out due to vibration and other forces. The presence of safety wiring may also serve to indicate that the fasteners have been properly tightened. Safety wire is available in a variety of gauges and materials, depending on the application. In aircraft and racing applications, stainless steel wire is used, such as in diameter. Typically, the wire is threaded through a hole drilled into a fastener or part, then twisted and anchored to a second fastener or part, then twisted again. Application Principle There are a few techniques for different applications. The word safetying is universally used in the aircraft industry. Briefly, safetying is defined as: "Securing by various means any nut, bolt, turnbuckle etc., on the aircraft so that vibration will not cause it to loosen during operation." These practices are not a means of obtaining or maintaining torque, rather a safety device to prevent the disengagement of screws, nuts, bolts, snap rings, oil caps, drain cocks, valves, and parts. The wire itself maintains tension and remains in place by being twisted around itself and attached to the fastener to be secured on one end and an anchor point (which could be another fastener) on the other end. Since safety wire is made of a malleable alloy, it retains its shape after being bent, rather than springing back to its original shape. This property allows it to remain locked around an object, such as when it is passed through a small hole on a fastener, looped back upon itself and then twisted. The same process is then repeated around the anchor point, which could be another fastener. Since it remains twisted instead of unraveling, it acts as a fixed loop and will not back out without considerable force (greater than the stresses which it is intended to counter) being applied. Mousing (pronounced ) is the application of a molly or safety wire, called mousing wire in this use, to secure a threaded clevis pin to a shackle. This is done by passing a couple of turns of mousing wire through the reach-hole provided for this purpose in the unthreaded end of the clevis pin and around the body of the shackle's hoop. Alternatively, some threaded shackles are provided with a hole through the threaded end of the pin beyond where it emerges from the threaded hole. A cotter pin or a couple of loops of mousing wire through this hole serves the same purpose and secures the shackle in a closed position. Nylon zip ties are also commonly used in applications where the shackle must be secured but easy removal is required. Use A safety wire is used to ensure proper security for a fastener. The wire needed is long enough to reach from a fixed location to a hole in the removable fastener, such as a pin — a clevis fastener, sometimes a linchpin or hitch-pin through a clevis yoke for instance — and the wire pulled back upon itself, parallel to its other end, then twisted, a single end inserted through a fastener, and twisted again, possibly then anchored to a second fastener or other part, then twisted once again, having excess slack pulled relatively taut to be secure. The two ends of the wire-loop thus formed are joined by twisting them together with a tool, using enough twists to be secure, then released from the twisting tool. 
The removable fastener — possibly a nut, wing nut, turnbuckle, a bolt or a pin similar to a bolt — having a hole through a part of it that will remain accessible when it is fastened in place will be secured with the wire passing through it. When finished, any excess length of wire is cut off with a pair of wire cutters, such as pliers that may also be the twisting tool. If the fastener part to be secured does not come with a hole for the safety wire, one may need to be drilled. Safety wire is not reusable; it is simply cut apart in order to remove it when the fastener is to be opened. Proper techniques When using the most common gauge of safety wire, which is , guidance for installation can be found in several publicly available sources, including FAA AC 43.13-1B and MS33540. FAA AC 43.13-1B identifies only 6 to 8 twists per inch (about 6 to 4 mm pitch) and makes no other reference to twists per inch, whether for hand twisting or for twisting with special tools. The Aviation Mechanic Handbook identifies different twists per inch for different wire diameters: 8–14 twists per inch (~3–2 mm pitch), 6–11 twists per inch (~4–2 mm pitch), and 4–8 twists per inch (~6–3 mm pitch). The safety wire should be threaded through the fastener such that it creates tension in the direction opposite the fastener's removal. For example, if a standard automotive bolt in the U.S. is being secured, then the installed safety wire should put tension on the bolt in a clockwise direction, since that is the direction in which the bolt turns to tighten. When drilling a fastener, the choice of where to drill it depends on the type of fastener and on what it will be wired to. The alternative to drilling holes in fasteners is to use safety wire tabs (see Safety wire tabs section below), or to purchase pre-drilled fasteners. AC 43.13-1B, par. 7-124 f., page 7-21 specifies: "Safety wire ends must be bent under and inward toward the part to avoid sharp or projecting ends, which might present a safety hazard." Witness wire A simpler application of safety wire, more commonly referred to as witness wiring, is the use of light-gauge, single-strand copper wire to provide positive visual confirmation of the security or closure of specific equipment within the aerospace industry. Common applications include securing safety equipment such as fire extinguishers and safety equipment bags, as well as providing assurance that critical system switch covers remain in place, such as those associated with fire suppression or ejection systems. This application of witness wires is widely varied, and may cover a broad range of types of equipment and numerous situations. Witness wires also provide a rapid method for ensuring that critical safety equipment or systems have not been used or tampered with since their last repair, reset, or inspection, and that the container of such equipment has not been inadvertently opened, disturbed or tampered with, thereby providing confidence in their readiness for use. In a similar manner, critical system switch covers are protected from inadvertent activation through the application of witness wire.
The gauge of copper wire utilized in this application is such that the wire security can be overcome with minimal breaking force by hand, without damage to the equipment or persons, and once broken, remains affixed to the equipment without the introduction of foreign object damage (FOD). Typically the wire is threaded through existing holes in the associated equipment, using a single strand loop, and a single crossover, such that the closure is secured without impedance to the normal functioning of the equipment. The single crossover provides the appropriate friction such that the wire cannot fall completely free of the equipment when broken. The loose ends of the strand may be twisted in a pigtail fashion, or crimped with a lead seal, securing both strands as close to the closure as practical. In each case, the loose ends of the strands should be tucked neatly away from inadvertent impact. Equipment Safety wire Safety wire is commonly in diameter, but diameters are also available. It is usually made of stainless steel, but is also available in monel and inconel alloys for high temperature applications and copper for break-away applications. For consumer applications, it is typically sold in spools enclosed in a small cardboard or plastic canister. Safety wire twisters A safety wire twister is a simple tool that allows the user to grip the two loose ends of a piece of safety wire and then, while holding the main barrel of the tool, turn the end that is not gripping the wire (which is bent to create a simple cranking mechanism) in order to twist the safety wire. There is another type of basic safety wire twister which is similar to a standard screwdriver, except that the tip has a small grasping mechanism to hold the ends of the wire while the technician turns the handle to twist the wire. The advantage to this tool is its long and thin design, which can access hard-to-reach areas where one's hands or pliers do not fit. It is commonly referred to as a "pignose" due to its snout-like appearance. Safety wire tabs Safety wire tabs are washers that are used to secure fasteners by transferring the force of the safety wire to the head of the fastener to be secured. They are installed just like any other washer, after which the sides of the tab are bent up to make contact with the sides of the head of the fastener. One side of the tab is longer than the other with a small hole at the top, through which safety wire is threaded. Once the safety wire is properly installed, the sides of the tab transfer the force of the safety wire to the fastener, as though the fastener itself had been drilled and had the safety wire run through it. The advantage of safety wire tabs is that the fastener to be secured does not need to be drilled, which can be advantageous for fasteners that should not or cannot be drilled because of size or damage concerns. They can also be useful when a fastener needs to be replaced, the replacement is not already drilled, and circumstances do not afford the time or tools to properly prepare the replacement fastener. The disadvantages are that it adds extra distance between the head of the fastener and the surface to which it is to be secured, and it is not as secure as securing the wire directly to the fastener itself as the tab could be a point of failure if it somehow unbends or the hole breaks (which is more likely than the hole in a drilled fastener failing due to the thinness and malleability of the material from which it is made). 
Pre-drilled fasteners For certain applications where safety wiring is common, fasteners come pre-drilled with holes to accept safety wire. When wiring something that did not come with pre-drilled fasteners stock, however, the more cost-effective way (as opposed to replacing all stock fasteners with pre-drilled ones of the same type) is often to drill the stock fasteners. Drilling jigs Because the use of twisted safety wire to secure fasteners requires the fasteners to be drilled, tool makers offer drill jigs to help technicians drill the fasteners to be secured. Although pre-drilled fasteners can be obtained, most fasteners to be secured start out never having been intended to be secured (e.g., a production motorcycle which was built for the street but which has been converted into a race-bike). Such fasteners need to be drilled. Drilling them is often difficult as, due to their small size and irregular shape, securing them properly and applying a drill effectively can be trying. As a result, technicians often break drill bits or damage the fastener when the bit slides off position. A useful tool is a drill press, because it allows the technician to apply the force of the drill bit directly to the fastener being drilled and eliminates lateral movement; but even with a press the fastener needs to be secured to prevent it from sliding out from beneath the bit. Even though drill presses ease the process, a press isn't always available, such as at a race event; and even with a press, the problem of securing the fastener still exists. To solve those problems, jigs are available which are designed to securely hold the fastener while providing a guide-channel for a drill bit (with either a hand drill or a press) so that the technician can easily and directly apply force from the drill to the fastener without having it slip off or breaking the bit. Proprietary methods Safe-T-Cable is an alternative to safety wire. Safe-T-Cable is defined as a group of strands right-hand helically twisted without a core. This eliminates the need for twisting during installation, as is required with safety wire. Several companies manufacture safety cable, and it is becoming an industry standard due to easier control of critical inputs and reduced installation time. Installation and quality requirements of Safe-T-Cable are governed by SAE AS4536. The system works by providing pre-cut lengths of safety wire that have a large cap on one end. The cable is threaded through a hole on the fastener to be secured, which is large enough to accommodate the wire but too small for the cap on the other end of the cable to pass through it. After the other end of the wire is passed through the anchor point, the technician takes an extra ferrule and the special tool that is available from Daniels and crimps the ferrule onto that end of the wire. In the same motion, the ferrule is crimped and the remaining cable is cut. Again, the end cap is too large to pass through the hole that the wire passed through, and thus the cable is secured. In addition to installing 3 times quicker than safety wire, Safe-T-Cable also produces a stronger and safer result. The user-friendly tooling guarantees a consistent installation each time it is used. Unlike safety wire, Safe-T-Cable installations have no sharp edges, which greatly reduces the risk of injury. Safe-T-Cable also reduces repetitive motion injuries and FOD. The installation and inspection of Safe-T-Cable is simple, easy to train, and rework is virtually eliminated. 
Operation overview: Thread: A cable assembly is threaded through the fasteners in a direction which will exert a positive or neutral pull when tension is applied Insert: The ferrule is threaded on the cable and the cable is inserted through the nose of the tool Tension: Correct tension is applied with the tool Crimp and cut: The ferrule is crimped and the cable is cut flush with the end of the ferrule. Discard excess cable and the job is complete. Advantages and disadvantages Although many systems purport to be more efficient than installing traditional safety wire, an advantageous by-product of the twisting method of installing safety wire is that it leaves a highly visible and easily inspectable indication that the fasteners in question are in fact properly secured. In addition, safety wire twisting is a standard, non-proprietary technique, and tools and materials can be easily found, cheaply purchased, and mixed with other brands while still working properly (provided of course that all components are used properly and with the proper types of complementary components and tools, if not brands). The primary disadvantage of traditional safety wire is the time it requires to install properly when securing fasteners, although technicians who use it often can implement it fairly quickly. It also leaves behind waste products when ends are clipped off or when it is cut off secured fasteners that need to be removed during maintenance, resulting in sharp metal bits that can easily damage soft materials or injure skin. However, the amount of waste product is relatively small, it is non-toxic, and the hazard can be mitigated altogether if technicians properly dispose of any waste product. When clipping off ends, the ends can go flying off which makes their recovery difficult and can cause injury to anyone in the immediate vicinity, such as the technician or an assistant; however, this can also be easily mitigated by using extra care or by using safety wire pliers that have a special insert that is designed to catch clipped off ends. Another disadvantage is that since the manual skill required to implement traditional safety wire is easily learned, the techniques required to maximize the retentive force of safety wire (e.g., in which direction the retentive force should be exerted, the direction of twist, proper angles for securing multiple fasteners, proper twists per inch, or alternatively pitch in millimeters, which type of wire to use, etc.) are often ignored by non-formally trained technicians (e.g., hobbyists) who use safety wire for their projects. Wire can prevent nuts from falling off, but can not prevent fatigue failure which results when low tensioned bolts are exposed to vibration. Alternatives There are also other systems of fastener retention that do not rely on safety wire at all, such as lock washers, locknuts, jam nuts, thread-locking fluid, castellated nuts and cotter pins, all of which accomplish a similar objective as safety wire, which is to prevent nuts backing off (falling off). Locknuts such as Nyloc nuts and HARDLOCK nuts have the additional objective of preventing lost bolt tension. Loss of bolt tension can cause fatigue failure in the joint. 
References Further reading Aviation Publication (AvP) 970 dated 1959:Design Requirements for Service Aircraft Air Publication (AP) 970 2nd Edition dated 1924:Handbook of Strength Calculations Handbook (HB) 806 1st Edition dated 1918:Handbook of Strength Calculations The Society of British Aerospace Companies Ltd Reference Sheet 697 – The Wire Locking of Threaded Items External links The Use of Lock Wire – A Guide Safety equipment Fasteners
Safety wire
[ "Engineering" ]
3,648
[ "Construction", "Fasteners" ]
1,918,370
https://en.wikipedia.org/wiki/Bamford%E2%80%93Stevens%20reaction
The Bamford–Stevens reaction is a chemical reaction whereby treatment of tosylhydrazones with strong base gives alkenes. It is named for the British chemist William Randall Bamford and the Scottish chemist Thomas Stevens Stevens (1900–2000). The usage of aprotic solvents gives predominantly Z-alkenes, while protic solvent gives a mixture of E- and Z-alkenes. As an alkene-generating transformation, the Bamford–Stevens reaction has broad utility in synthetic methodology and complex molecule synthesis. The treatment of tosylhydrazones with alkyl lithium reagents is called the Shapiro reaction. Reaction mechanism The first step of the Bamford–Stevens reaction is the formation of the diazo compound 3. In protic solvents, the diazo compound 3 decomposes to the carbenium ion 5. In aprotic solvents, the diazo compound 3 decomposes to the carbene 7. Directed Bamford-Stevens reaction The Bamford–Stevens reaction has not proved useful for the stereoselective generation of alkenes via thermal decomposition of metallated tosylhydrazones due to the indiscriminate 1,2-rearrangement of the carbene center, which gives a mixture of products. By replacing an alkyl group with a trimethylsilyl (TMS) group on N-aziridinylimines, migration of a specific hydrogen atom can be enhanced. With the silicon atom beta to H, a σC-Si → σ*C-H stereoelectronic effect weakens the C-H bond, resulting in its exclusive migration and leading to the nearly exclusive formation of allylsilanes instead of equal amounts of allylsilanes and isomeric homoallylsilanes, analogous to the mixture of products seen in the dialkyl case, or other insertion products (i.e. cyclopropanes). See beta-silicon effect. Synthesis of 3-substituted indazoles from arynes and N-tosylhydrazones N-tosylhydrazones can be used in a variety of synthetic procedures. Their use with arynes has been used to prepare 3-substituted indazoles via two proposed pathways. The first step is the deprotonation of the hydrazone of diazo compounds using CsF. At this point, the conjugate base could either decompose to give the diazo compound and undergo a [3+2] dipolar cycloaddition with the aryne to give the product, or a [3+2] annulation with aryne which would also give the final product. While strong bases, such as LiOtBu and Cs2CO3 are often used in this chemistry, CsF was used to facilitate the in situ generation of arynes from o-(trimethylsilyl)aryl triflates. CsF was also thought to be sufficiently basic to deprotonate the N-tosylhydrazone. N-tosylhydrazones as reagents for cross-coupling reactions Barluenga and coworkers developed the first example of using N-tosylhydrazones as nucleophilic partners in cross-coupling reactions. Typically, nucleophilic reagents in coupling reactions tend to be of the organometallic variety, namely organomagnesium, -zinc, -tin, -silicon, and –boron. Combined with electrophilic aryl halides, N-tosylhydrazones can be used to prepare polysubstituted olefins under Pd-catalyzed conditions without the use of often expensive, and synthetically demanding organometallic reagents. The scope of the reaction is wide; N-tosylhydrazones derived from aldehydes and ketones are well tolerated, which leads to both di- and trisubstituted olefins. Moreover, and variety of aryl halides are well tolerated as coupling partners including those bearing both electron-withdrawing and electron-donating groups, as well as π-rich and π-deficient aromatic heterocyclic compounds. Stereochemistry is an important element to consider when preparing polysubstituted olefins. 
Using hydrazones derived from linear aldehydes resulted in exclusively trans olefins, while the stereochemical outcomes of trisubstituted olefins were dependent on the size of the substituents. The mechanism of this transformation is thought to proceed in a manner similar to the synthesis of alkenes through the Bamford–Stevens reaction; the decomposition of N-tosylhydrazones in the presence of base to generate diazocompounds which then release nitrogen gas, yielding a carbene, which then can be quenched with an electrophile. In this case, the coupling reaction starts with the oxidative addition of the aryl halide to Pd0 catalyst to give the aryl PdII complex. The reaction of the diazocompound, generated from the hydrazone, with the PdII complex produces a Pd-carbene complex. A migratory insertion of the aryl group gives an alkyl Pd complex, which undergoes syn beta-hydride elimination to generate the trans aryl olefin and regenerate the Pd0 catalyst. This reaction has also seen utility in preparing conjugated enynes from N-tosylhydrazones and terminal alkynes under similar Pd-catalyzed reaction conditions and following the same mechanism. Moreover, Barluenga and coworkers demonstrated a one-pot three-component coupling reaction of aldehydes or ketones, tosylhydrazides, and aryl halides in which the N-tosylhydrazone is formed in situ. This process produces stereoselective olefins in similar yields compared to the process in which preformed N-tosylhydrazones are used. Barluenga and coworkers also developed metal-free reductive coupling methodology of N-tosylhydrazones with boronic acids. The reaction tolerates a variety of functional groups on both substrates, including aromatic, heteroaromatic, aliphatic, electron-donating and electron-withdrawing substituents, and proceeds with high yields in the presence of potassium carbonate. The reaction is thought to proceed through the formation of a diazo compound that is generated from a hydrazone salt. The diazo compound could then react with the boronic acid to produce the benzylboronic acid through a boronate intermediate. An alternate pathway consists of the formation of the benzylboronic acid via a zwitterionic intermediate, followed by protodeboronation of the benzylboronic acid under basic conditions, which results in the final reductive product. This methodology has also been extended to heteroatom nucleophiles to produce ethers and thioethers. A tandem rhodium-catalyzed Bamford-Stevens/thermal aliphatic Claisen rearrangement A novel process was developed by Stoltz in which the Bamford–Stevens reaction was combined with the Claisen rearrangement to produce a variety of olefin products. This transformation proceeds first by the thermal decomposition of N-aziridinylhydrazones to form the diazo compound (1), followed by a rhodium-mediated de-diazotization (2) and the syn 1,2-hydride shift (3). This substrate undergoes a thermal aliphatic Claisen rearrangement (4) to yield the product. Application to total synthesis Trost et al. utilized the Bamford–Stevens reaction in their total synthesis of (–)-isoclavukerin to introduce a diene moiety found in the natural product. A bicyclic trisylhydrazone was initially subjected to Shapiro reaction conditions (alkyllithiums or LDA), which only led to uncharacterizable decomposition products. When this bicyclic trisylhydrazone was subjected to strong base (KH) and heat, however, the desired diene product was generated. 
Moreover, it was shown that olefin generation and the following decarboxylation could be performed in one pot. To that end, excess NaI was added, along with an elevation in temperature to facilitate the Krapcho decarboxylation. References See also Shapiro reaction Olefination reactions Elimination reactions Name reactions
Bamford–Stevens reaction
[ "Chemistry" ]
1,801
[ "Name reactions", "Olefination reactions", "Organic reactions" ]
25,347,757
https://en.wikipedia.org/wiki/Dexter%20and%20sinister
Dexter and sinister are terms used in heraldry to refer to specific locations in an escutcheon bearing a coat of arms, and to the other elements of an achievement. Dexter (Latin for 'right') indicates the right-hand side of the shield, as regarded by the bearer, i.e. the bearer's proper right, and to the left as seen by the viewer. Sinister (Latin for 'left') indicates the left-hand side as regarded by the bearer – the bearer's proper left, and to the right as seen by the viewer. In vexillology, the equivalent terms are hoist and fly. Significance The dexter side is considered the side of greater honour, for example when impaling two arms. Thus, by tradition, a husband's arms occupy the dexter half of his shield, his wife's paternal arms the sinister half. The shield of a bishop shows the arms of his see in the dexter half, his personal arms in the sinister half. King Richard II adopted arms showing the attributed arms of Edward the Confessor in the dexter half and the royal arms of England in the sinister. More generally, by ancient tradition, the guest of greatest honour at a banquet sits at the right hand of the host. The Bible is replete with passages referring to being at the "right hand" of God. Sinister is used to indicate that an ordinary or other charge is turned to the heraldic left of the shield. A bend sinister is a bend (diagonal band) which runs from the bearer's top left to bottom right, as opposed to top right to bottom left. As the shield would have been carried with the design facing outwards from the bearer, the bend sinister would slant in the same direction as a sash worn diagonally on the left shoulder. A bend (without qualification, implying a bend dexter, though the full term is never used) is a bend which runs from the bearer's top right to bottom left. In the same way, the terms per bend and per bend sinister are used to describe a heraldic shield divided by a line like a bend or bend sinister, respectively. This division is key to dimidiation, a method of joining two coats of arms by placing the dexter half of one coat of arms alongside the sinister half of the other. In the case of marriage, the dexter half of the husband's arms would be placed alongside the sinister half of the wife's. The practice fell out of use as early as the 14th century and was replaced by impalement. In some cases, it could render the arms that are cut in half unrecognizable and in some cases, it would result in a shield that looked like one coat of arms rather than a combination of two. The Great Seal of the United States features an eagle clutching an olive branch in its dexter talon and arrows in its sinister talon, indicating the nation's intended inclination to peace. In 1945, one of the changes ordered for the similarly arranged flag of the president of the United States by President Harry S. Truman was having the eagle face towards its right (dexter, the direction of honour) and thus towards the olive branch. Origin The sides of a shield were originally named for the purpose of military training of knights and soldiers long before heraldry came into use early in the 13th century so the only viewpoint that was relevant was the bearer's. The front of the purely functional shield was originally undecorated. It is likely that the use of the shield as a defensive and offensive weapon was almost as developed as that of the sword itself and so the various positions or strokes of the shield needed to be described to students of arms. 
Such usage may indeed have descended directly from Roman training techniques that were spread throughout Roman Europe and then continued during the age of chivalry when heraldry came into use. References Heraldry Orientation (geometry)
Dexter and sinister
[ "Physics", "Mathematics" ]
789
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
25,350,901
https://en.wikipedia.org/wiki/Cis%20%28mathematics%29
is a mathematical notation defined by , where is the cosine function, is the imaginary unit and is the sine function. is the argument of the complex number (angle between line to point and x-axis in polar form). The notation is less commonly used in mathematics than Euler's formula, which offers an even shorter notation for but cis(x) is widely used as a name for this function in software libraries. Overview The notation is a shorthand for the combination of functions on the right-hand side of Euler's formula: where . So, i.e. "" is an acronym for "". It connects trigonometric functions with exponential functions in the complex plane via Euler's formula. While the domain of definition is usually , complex values are possible as well: so the function can be used to extend Euler's formula to a more general complex version. The function is mostly used as a convenient shorthand notation to simplify some expressions, for example in conjunction with Fourier and Hartley transforms, or when exponential functions shouldn't be used for some reason in math education. In information technology, the function sees dedicated support in various high-performance math libraries (such as Intel's Math Kernel Library (MKL) or MathCW), available for many compilers and programming languages (including C, C++, Common Lisp, D, Haskell, Julia, and Rust). Depending on the platform, the fused operation is about twice as fast as calling the sine and cosine functions individually. Mathematical identities Derivative Integral Other properties These follow directly from Euler's formula. The identities above hold if and are any complex numbers. If and are real, then History The notation was first coined by William Edwin Hamilton in Elements of Quaternions (1866) and subsequently used by Irving Stringham (who also called it "sector of ") in works such as Uniplanar Algebra (1893), James Harkness and Frank Morley in their Introduction to the Theory of Analytic Functions (1898), or by George Ashley Campbell (who also referred to it as "cisoidal oscillation") in his works on transmission lines (1901) and Fourier integrals (1928). In 1942, inspired by the notation, Ralph V. L. Hartley introduced the (for cosine-and-sine) function for the real-valued Hartley kernel, a meanwhile established shortcut in conjunction with Hartley transforms: Motivation The notation is sometimes used to emphasize one method of viewing and dealing with a problem over another. The mathematics of trigonometry and exponentials are related but not exactly the same; exponential notation emphasizes the whole, whereas and notations emphasize the parts. This can be rhetorically useful to mathematicians and engineers when discussing this function, and further serve as a mnemonic (for ). The notation is convenient for math students whose knowledge of trigonometry and complex numbers permit this notation, but whose conceptual understanding does not yet permit the notation . The usual proof that requires calculus, which the student may not have studied before encountering the expression . This notation was more common when typewriters were used to convey mathematical expressions. See also De Moivre's formula Euler's formula Complex number Ptolemy's theorem Phasor Versor Notes References Trigonometry Mathematical identities
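The inline formulas in this entry were stripped during extraction. The definitions they refer to are standard; a minimal LaTeX reconstruction (standard textbook identities, supplied here rather than recovered from the original markup) is:

```latex
\operatorname{cis} x = \cos x + i\sin x, \qquad e^{ix} = \cos x + i\sin x \quad\text{(Euler's formula)},
```

```latex
\frac{d}{dx}\operatorname{cis} x = i\operatorname{cis} x, \qquad
\int \operatorname{cis} x \, dx = -i\operatorname{cis} x + C, \qquad
\operatorname{cis}(x+y) = \operatorname{cis} x \cdot \operatorname{cis} y,
\qquad \operatorname{cas} x = \cos x + \sin x \ \text{(Hartley kernel)}.
```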
Cis (mathematics)
[ "Mathematics" ]
689
[ "Algebra", "Mathematical theorems", "Mathematical problems", "Mathematical identities" ]
25,351,068
https://en.wikipedia.org/wiki/Nuclear%20C%2A-algebra
In the mathematical field of functional analysis, a nuclear C*-algebra is a C*-algebra such that for every C*-algebra the injective and projective C*-cross norms coincide on the algebraic tensor product and the completion of with respect to this norm is a C*-algebra. This property was first studied by under the name "Property T", which is not related to Kazhdan's property T. Characterizations Nuclearity admits the following equivalent characterizations: The identity map, as a completely positive map, approximately factors through matrix algebras. By this equivalence, nuclearity can be considered a noncommutative analogue of the existence of partitions of unity. The enveloping von Neumann algebra is injective. It is amenable as a Banach algebra. (For separable algebras) It is isomorphic to a C*-subalgebra of the Cuntz algebra with the property that there exists a conditional expectation from to . Examples The commutative unital C*-algebra of (real or complex-valued) continuous functions on a compact Hausdorff space, as well as the noncommutative unital algebra of real or complex matrices, are nuclear. See also References C*-algebras Functional analysis Operator theory
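The tensor-product condition above lost its symbols during extraction. Under the usual notation — writing A for the algebra in question and B for an arbitrary C*-algebra, labels chosen here since the originals were stripped — the nuclearity condition is standardly stated as:

```latex
\|x\|_{\min} \;=\; \|x\|_{\max} \qquad \text{for all } x \in A \odot B,
```

so that the algebraic tensor product A ⊙ B carries a unique C*-norm and A ⊗_min B = A ⊗_max B.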
Nuclear C*-algebra
[ "Mathematics" ]
275
[ "Functional analysis", "Mathematical objects", "Functions and mappings", "Mathematical relations" ]
25,351,840
https://en.wikipedia.org/wiki/Fine%20structure%20genetics
Fine structure genetics encompasses a set of tools used to examine not just the mutations within an entire genome, but can be isolated to either specific pathways or regions of the genome. Ultimately, this more focused lens can lead to a more nuanced and interactive view of the function of a gene. Regional Mutagenesis Similar to forward genetics, regional mutagenesis seeks to saturate with insertions or point mutations, but instead of for the entire genome, it saturates only a small portion of the genome. By limiting the region in focus, researchers are then able to intensify the number of mutations within any genes or promoters within that regions, often illuminating more complicated functions than could be identified with a broader focus. Furthermore, such mutations can show how the specific structure of that region of a chromosome affect expression levels and function. Such mutations are introduced in the same means as forward genetics, often through chemical induction or transposable element insertions. The creation of specific balancer chromosomes that are restrictive to only a small region of the genome can guarantee that mutations will only be isolated and reproduced only in that region. Modifier Screens When a gene is identified as affecting a specific phenotype, a modifier screen can be used to assess which genes that either enhance or inhibit the phenotypic expression of the initial mutation. This is a powerful way of rapidly identifying many genes that are involved in the expression of a phenotype, but such screens can only say whether or not two genes interact, not what their exact function are, or how they relate. For instance, the product of the second gene may interact directly with that of the first gene, or it may be involved in distantly on the pathway. One of the major benefits of modifier screens is that screens do not necessarily have to take place in the organism of interest. For instance, a gene that corresponds to an important phenotype in an organism in which a set of screens involving mutagenesis (i.e. human beings), will often have a homologue in a model organism. In this case, that homologous gene can either be knocked out or the initial gene can be ectopically expressed in the model organism, at which point a screen for modifiers of the aberrant phenotype can take place. Enhancer trapping Enhancer trapping involves the insertion of a reporter gene, such as lac-Z or GFP, into the promoter region a desired gene, so that whenever the gene is expressed, it can be monitored by said reporter, giving a specific spatial and temporal map of when a gene expressed. This method again involves Transposable Element insertion, taking advantage of certain transposable elements that have a propensity to insert into promoter regions. This method is also advantageous as such insertions can be reversed. A similar method can be used to study novel phenotypes created by tissue specific gain-of-function or loss of function. In order to create gain-of-function, the TE is inserted with not just with a reporter gene, but also with the GAL4 transcriptional activator. When this line is crossed with an organism with a gene fused with a GAL4 mediated promoter. This way anytime that particular promoter is turned on, it will not only express its original gene, it will also turn on expression of any gene the experimenter would like turned on. This is an easy way to ensure tissue or time specific expression of a gene where it is not usually expressed. 
Under a similar principle, the GAL4 transcriptional activator can be replaced with an RNAi construct for a specific gene. This can make any promoter into an inhibitor of a gene in a specific location. Floxing For a fuller explanation, see Cre-Lox Recombination With a similar effect as the insertion of TE with RNAi constructs, Cre-Lox recombinants can be used to have tissue specific loss-of-function. It is particularly useful in dissecting the specific functions of genes that are essential in development, and therefore knock-outs are lethal. References A Primer of Genome Science, Third Edition. Greg Gibson and Spencer V. Muse. 2009. Sinauer Press Molecular genetics Genomics
Fine structure genetics
[ "Chemistry", "Biology" ]
863
[ "Molecular genetics", "Molecular biology" ]
25,352,315
https://en.wikipedia.org/wiki/Canadian%20Capacity%20Guide%20For%20Signalized%20Intersections
The Canadian Capacity Guide for Signalized Intersections (CCG) is a publication of the Canadian Institute of Transportation Engineers (CITE). It provides a methodology that allows Traffic Engineers to plan, design, and evaluate traffic signal controlled roadway intersections. The CCG has been based on the current experience of practicing traffic engineers, transportation educators and students across Canada, and a considerable body of Canadian and international research. But while developed in Canada, its methodology is applicable to conditions anywhere. The survey procedures included in the CCG provide direction for users in any country to collect local data which can be used to obtain geographically specific results. Objectives Many cities and metropolitan areas experience traffic congestion on some portions of their transportation networks. These municipalities also suffer from constrained urban space and limited financial resources, but they share the desire to improve the quality of their environment. The analytical tools to understand specific problems require refined methods for the evaluation of alternative solutions. Techniques included in the CCG allow Traffic Engineers to analyze various situations and intersection configurations. This Guide emphasizes the importance of a clear definition of the objectives of signal operation at a specific location. It also provides an understanding of the role that the intersection plays in the travel patterns, public transportation, and both motorized and non-motorized modes of transportation. The focus of the CCG is on the movement of traffic flow units, such as cars, trucks, transit vehicles, cyclists, and pedestrians at signalized intersections. The main parameter is the time dimension that determines how efficiently the available roadway space is used by conflicting traffic streams. The allocation of time to the movement of vehicular and pedestrian traffic in lanes and crosswalks influences not only intersection capacity, but also a number of other measures that describe the quality of service provided for the users. To this end, and to provide input to investigations of possible impacts, the Guide provides both analytical and evaluation methods, and a set of up-to-date numerical parameters for Canadian conditions. Using the Guide, it is possible to assess a variety of solutions by application of a set of practical evaluation criteria. The evaluation criteria, or measures of effectiveness, provide the user with a comprehensive account of intersection operation. Two of the key measures of effectiveness are total person delay and delay to pedestrians. These criteria are essential as the prerequisites for an equitable treatment of all modes of transportation, especially public transit. Other performance measures relate the Guide to environmental, economic and safety analyses, and serve as vital information for transportation demand modelling. Delay and the ratio of volume to capacity are two key parameters widely used in the profession to assess the performance of an intersection. The Guide focuses on the ratio of volume to capacity as a rational measure of how well the intersection is accommodating demand, but it is acknowledged that delay is also widely used (for example, in the Highway Capacity Manual). Whether one parameter or the other is the most relevant is the subject of ongoing debate in the profession. 
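To make the volume-to-capacity measure concrete, the sketch below shows the generic textbook relationship c = s · (g/C) on which signalized-intersection guides build. This is illustrative only: the formula, the assumed saturation flow, and the signal timing values are standard classroom figures, not the CCG's actual procedure or its Canadian parameter set.

```python
# Basic lane-group capacity and volume-to-capacity (v/c) calculation.
# Capacity = saturation flow scaled by the share of the cycle that is effective green.

def capacity_veh_per_h(saturation_flow: float, effective_green_s: float, cycle_s: float) -> float:
    """Capacity of a lane group in vehicles per hour."""
    return saturation_flow * (effective_green_s / cycle_s)

def volume_to_capacity(demand_veh_per_h: float, capacity: float) -> float:
    """The v/c ratio used as a performance measure."""
    return demand_veh_per_h / capacity

# Example: assumed 1,800 veh/h/lane saturation flow, 30 s effective green in a 90 s cycle.
c = capacity_veh_per_h(1800, 30, 90)   # 600 veh/h
x = volume_to_capacity(540, c)         # 0.90 -> operating near capacity
print(f"capacity = {c:.0f} veh/h, v/c = {x:.2f}")
```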
It is advisable to consider both parameters in the assessment of an intersection, at the level of the individual movement, the approach and the intersection as a whole. Scope The Guide provides a set of techniques that can be applied to operational, design and planning problems at signalized intersections. The operational procedures deal with a detailed assessment of operating conditions within a relatively short time frame when all factors are known or can be reasonably estimated. The design process is used to determine specific control parameters and geometric features of an intersection that will meet desired design objectives and performance criteria. Planning techniques, often called functional design, are useful for longer range problems, assisting in the determination of the type of the facility and its basic dimensions. The basic method remains the same for all three application types, but the level of detail varies. Wherever possible, the Guide utilizes formula-oriented techniques that can be applied both in manual calculations and computer programs, including spreadsheet tables. Although advanced simulation and other computerized techniques may prove to be superior to formula based methods in the future, the understanding of the fundamentals contained in the Guide remains essential. Where practical, measured input parameters and measured output performance criteria are preferable to calculated values. Correct and consistent survey methods as well as a critical assessment of the degree of precision and reliability of the survey results are essential. The principles and components of the timing design and evaluation processes are based on the international state-of-the-art in both the research and practice for intersection control. As a consequence, a knowledgeable user will find many similarities to other international documents. Nevertheless, some individual procedures, especially with respect to saturation flow and evaluation criteria, may differ because they were developed, tested, or adjusted for specific Canadian conditions. Some methods and parameter values are a direct result of the work on this Edition, but wherever possible, the original references or sources are identified. The Guide allows the evaluation of existing or future intersection control or geometric conditions relative to travel demand. It does not deal directly with broad systems or network issues, such as transportation demand management or congestion management. The results of the procedures included in the Guide, however, can be used as information for the evaluation of the impact of intersection control, or geometric alternatives on system aspects, such as population mobility, accessibility of various destinations or land use strategies. Although safety is an integral part of all traffic considerations, the Guide does not address this broad and complex issue explicitly. It is left to other specialized documents. Third Edition (2008) Similar to the First Edition, the new Third Edition of the Guide concentrates mostly on urban applications. Although the procedures focus on fixed-time signal operation, advice is provided for their adjustment to the design and evaluation of traffic responsive signal control, including the traffic actuated method. 
The objectives of the current 3rd edition of the Guide were as follows: to update and expand the Guide with respect to current practice; to consolidate the available Canadian information and experience on planning, design, and evaluation of signalized intersections in one document; to contribute to information exchange among Canadian transportation and traffic engineering professionals, and to further develop an advanced national practice; to provide guidance for both experienced and novice practitioners; to assist in the education of present and future transportation professionals. CITE & ITE The Canadian Capacity Guide for Signalized Intersections has been developed as a special project of the Canadian Institute of Transportation Engineers or CITE. This organization is composed of more than 1,700 transportation engineers, planners, technologists and students across Canada. CITE comprises District 7 of the Institute of Transportation Engineers, which consists of transportation professionals in more than 70 countries who are responsible for the safe and efficient movement of people and goods on streets, highways and transit systems. Software Tools The CITE has sanctioned the development of a 3rd party software solution, InterCalc, that fully supports the CCG methodology. In addition, PTV Vistro, developed by PTV Group, has integrated the Canadian Capacity Guide (CCG) methods into the software since the release of Version 6. This integration allows users to analyze small to large signalized urban networks in a modern traffic analysis software platform. References External links Download the Canadian Capacity Guide Canadian Institute of Transportation Engineers website Institute of Transportation Engineers website InterCalc Software website Transportation engineering
Canadian Capacity Guide For Signalized Intersections
[ "Engineering" ]
1,439
[ "Civil engineering", "Transportation engineering", "Industrial engineering" ]
25,354,354
https://en.wikipedia.org/wiki/Double%20recursion
In recursive function theory, double recursion is an extension of primitive recursion which allows the definition of non-primitive recursive functions like the Ackermann function. Raphael M. Robinson called functions of two natural number variables G(n, x) double recursive with respect to given functions, if G(0, x) is a given function of x. G(n + 1, 0) is obtained by substitution from the function G(n, ·) and given functions. G(n + 1, x + 1) is obtained by substitution from G(n + 1, x), the function G(n, ·) and given functions. Robinson goes on to provide a specific double recursive function (originally defined by Rózsa Péter) G(0, x) = x + 1 G(n + 1, 0) = G(n, 1) G(n + 1, x + 1) = G(n, G(n + 1, x)) where the given functions are primitive recursive, but G is not primitive recursive. In fact, this is precisely the function now known as the Ackermann function. See also Primitive recursion Ackermann function References Computability theory Recursion
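Read directly from the three defining equations, the function G is easy to transcribe into code. A minimal Python sketch (memoised only to keep small test values fast; the deep recursion makes larger arguments impractical):

```python
# Robinson's double-recursive function G (the Ackermann-Peter function),
# written directly from the defining equations. Illustrative, not efficient.

from functools import lru_cache

@lru_cache(maxsize=None)
def G(n: int, x: int) -> int:
    if n == 0:
        return x + 1                 # G(0, x) = x + 1
    if x == 0:
        return G(n - 1, 1)           # G(n + 1, 0) = G(n, 1)
    return G(n - 1, G(n, x - 1))     # G(n + 1, x + 1) = G(n, G(n + 1, x))

# Small values stay manageable; growth outpaces every primitive recursive function as n increases.
print([G(2, x) for x in range(5)])   # [3, 5, 7, 9, 11]
print(G(3, 3))                       # 61
```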
Double recursion
[ "Mathematics" ]
266
[ "Computability theory", "Mathematical logic stubs", "Mathematical logic", "Recursion" ]
25,358,908
https://en.wikipedia.org/wiki/Minimum%20k-cut
In mathematics, the minimum -cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph to at least connected components. These edges are referred to as -cut. The goal is to find the minimum-weight -cut. This partitioning can have applications in VLSI design, data-mining, finite elements and communication in parallel computing. Formal definition Given an undirected graph with an assignment of weights to the edges and an integer partition into disjoint sets while minimizing For a fixed , the problem is polynomial time solvable in However, the problem is NP-complete if is part of the input. It is also NP-complete if we specify vertices and ask for the minimum -cut which separates these vertices among each of the sets. Approximations Several approximation algorithms exist with an approximation of A simple greedy algorithm that achieves this approximation factor computes a minimum cut in each of the connected components and removes the lightest one. This algorithm requires a total of max flow computations. Another algorithm achieving the same guarantee uses the Gomory–Hu tree representation of minimum cuts. Constructing the Gomory–Hu tree requires max flow computations, but the algorithm requires an overall max flow computations. Yet, it is easier to analyze the approximation factor of the second algorithm. Moreover, under the small set expansion hypothesis (a conjecture closely related to the unique games conjecture), the problem is NP-hard to approximate to within factor for every constant , meaning that the aforementioned approximation algorithms are essentially tight for large . A variant of the problem asks for a minimum weight -cut where the output partitions have pre-specified sizes. This problem variant is approximable to within a factor of 3 for any fixed if one restricts the graph to a metric space, meaning a complete graph that satisfies the triangle inequality. More recently, polynomial time approximation schemes (PTAS) were discovered for those problems. While the minimum -cut problem is W[1]-hard parameterized by , a parameterized approximation scheme can be obtained for this parameter. See also Maximum cut Minimum cut Notes References NP-complete problems Combinatorial optimization Computational problems in graph theory Approximation algorithms
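The simple greedy algorithm described above is short enough to sketch. The following Python uses networkx's Stoer–Wagner global minimum-cut routine; it is an illustrative sketch assuming an undirected graph with edge weights stored under the 'weight' attribute (missing weights default to 1) and at least k vertices, not a tuned implementation.

```python
# Greedy (2 - 2/k)-approximation for minimum k-cut: repeatedly compute a global
# minimum cut in each connected component and remove the lightest one, until the
# graph has at least k connected components.

import networkx as nx

def greedy_k_cut(G: nx.Graph, k: int):
    H = G.copy()
    removed = []  # edges removed so far (the k-cut)
    while nx.number_connected_components(H) < k:
        best = None  # (cut_value, edges_crossing_the_cut)
        for comp in nx.connected_components(H):
            sub = H.subgraph(comp)
            if sub.number_of_nodes() < 2:
                continue
            value, (side_a, side_b) = nx.stoer_wagner(sub)
            crossing = [(u, v) for u, v in sub.edges() if (u in side_a) != (v in side_a)]
            if best is None or value < best[0]:
                best = (value, crossing)
        H.remove_edges_from(best[1])
        removed.extend(best[1])
    return removed
```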
Minimum k-cut
[ "Mathematics" ]
450
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Approximation algorithms", "Computational problems", "Mathematical relations", "Mathematical problems", "Approximations", "NP-complete problems" ]
25,359,516
https://en.wikipedia.org/wiki/Spin-polarized%20electron%20energy%20loss%20spectroscopy
Spin-polarized electron energy loss spectroscopy or SPEELS is a technique mainly used to measure the dispersion relation of the collective excitations, over the whole Brillouin zone. Spin waves are collective perturbations in a magnetic solid. Their properties depend on their wavelength (or wave vector). For long wavelength (short wave vector) spin waves, the resulting spin precession has a very low frequency and the spin waves can be treated classically. Ferromagnetic resonance (FMR) and Brillouin light scattering (BLS) experiments explain the long wavelength spin waves in ultrathin magnetic films and nanostructures. If the wavelength is comparable to the lattice constant, the spin waves are governed by the microscopic exchange coupling and a quantum mechanical description is needed. Therefore, experimental information on these short wavelength (large wave vector) spin waves in ultrathin films is highly desired and may lead to fundamentally new insights into the spin dynamics in reduced dimensions in the future. SPEELS is one of the few techniques that can be used to measure the dispersion of such short wavelength spin waves in ultrathin films and nanostructures. The first experiment For the first time Kirschner's group in Max Planck institute of Microstructure Physics showed that the signature of the large wave vector spin waves can be detected by spin polarized electron energy loss spectroscopy (SPEELS). Later, with a better momentum resolution, the spin wave dispersion was fully measured in 8 monolayer (ML) fcc cobalt film on Cu(001) and 8 ML hcp cobalt on W(110), respectively. Those spin waves were obtained up to the surface Brillouin zone (SBZ) at the energy range about few hundreds of meV. Another recent example is the investigation of 1 and 2 monolayer iron films grown on W(110) measured at 120 K and 300 K, respectively. References Scattering Electron spectroscopy Scientific techniques
Spin-polarized electron energy loss spectroscopy
[ "Physics", "Chemistry", "Materials_science", "Astronomy" ]
405
[ "Spectroscopy stubs", "Spectrum (physical sciences)", "Electron spectroscopy", "Scattering stubs", "Astronomy stubs", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Molecular physics stubs", "Spectroscopy", "Physical chemistry stubs" ]
25,360,121
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Samuel%20function
In commutative algebra the Hilbert–Samuel function, named after David Hilbert and Pierre Samuel, of a nonzero finitely generated module over a commutative Noetherian local ring and a primary ideal of is the map such that, for all , where denotes the length over . It is related to the Hilbert function of the associated graded module by the identity For sufficiently large , it coincides with a polynomial function of degree equal to , often called the Hilbert-Samuel polynomial (or Hilbert polynomial). Examples For the ring of formal power series in two variables taken as a module over itself and the ideal generated by the monomials x2 and y3 we have Degree bounds Unlike the Hilbert function, the Hilbert–Samuel function is not additive on an exact sequence. However, it is still reasonably close to being additive, as a consequence of the Artin–Rees lemma. We denote by the Hilbert-Samuel polynomial; i.e., it coincides with the Hilbert–Samuel function for large integers. Proof: Tensoring the given exact sequence with and computing the kernel we get the exact sequence: which gives us: . The third term on the right can be estimated by Artin-Rees. Indeed, by the lemma, for large n and some k, Thus, . This gives the desired degree bound. Multiplicity If is a local ring of Krull dimension , with -primary ideal , its Hilbert polynomial has leading term of the form for some integer . This integer is called the multiplicity of the ideal . When is the maximal ideal of , one also says is the multiplicity of the local ring . The multiplicity of a point of a scheme is defined to be the multiplicity of the corresponding local ring . See also j-multiplicity References Commutative algebra Algebraic geometry
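The displayed formulas in this entry were stripped. As a reminder of the standard definition, and to restore the power-series example, here is a LaTeX sketch; the closed form for the example is computed here via the usual regular-sequence argument rather than recovered from the article:

```latex
\chi^{I}_{M}(n) \;=\; \ell\bigl(M / I^{n} M\bigr), \qquad n \ge 1 .
```

For R = k[[x,y]] as a module over itself and I = (x^2, y^3): since x^2, y^3 form a regular sequence with ℓ(R/I) = 6, one has ℓ(I^n / I^{n+1}) = 6(n+1), and therefore

```latex
\chi^{I}_{R}(n) \;=\; \sum_{j=0}^{n-1} 6(j+1) \;=\; 3n^{2} + 3n ,
```

a polynomial of degree 2 = dim R with leading term (6/2!) n^2, so the multiplicity of I is e(I) = 6.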
Hilbert–Samuel function
[ "Mathematics" ]
366
[ "Fields of abstract algebra", "Commutative algebra", "Algebraic geometry" ]
3,615,903
https://en.wikipedia.org/wiki/Quantum%20inequalities
Quantum inequalities are local constraints on the magnitude and extent of distributions of negative energy density in space-time. Initially conceived to clear up a long-standing problem in quantum field theory (namely, the potential for unconstrained negative energy density at a point), quantum inequalities have proven to have a diverse range of applications. The form of the quantum inequalities is reminiscent of the uncertainty principle. Energy conditions in classical field theory Einstein's theory of General Relativity amounts to a description of the relationship between the curvature of space-time, on the one hand, and the distribution of matter throughout space-time on the other. This precise details of this relationship are determined by the Einstein equations . Here, the Einstein tensor describes the curvature of space-time, whilst the energy–momentum tensor describes the local distribution of matter. ( is a constant.) The Einstein equations express local relationships between the quantities involved—specifically, this is a system of coupled non-linear second order partial differential equations. A very simple observation can be made at this point: the zero-point of energy-momentum is not arbitrary. Adding a "constant" to the right-hand side of the Einstein equations will effect a change in the Einstein tensor, and thus also in the curvature properties of space-time. All known classical matter fields obey certain "energy conditions". The most famous classical energy condition is the "weak energy condition"; this asserts that the local energy density, as measured by an observer moving along a time-like world line, is non-negative. The weak energy condition is essential for many of the most important and powerful results of classical relativity theory—in particular, the singularity theorems of Hawking et al. Hawking radiation suggests that black holes emit thermal energy due to quantum effects, even though nothing escapes their event horizon directly. This process aligns with quantum inequalities, which set strict limits on how much energy can appear or disappear in a given space. These inequalities ensure that Hawking radiation remains consistent with the laws of physics, reinforcing the reality of both phenomena and their connection in extreme spacetime conditions. In addition, we have The Penrose inequality which is a rule that says the mass (or energy) of a black hole is related to the size of its event horizon (the boundary beyond which nothing can escape). This idea supports "cosmic censorship," which is the idea that we can never directly see a "naked" singularity (a point of infinite density inside a black hole). In the quantum world, which deals with very small particles, this rule gets expanded to include something called "entropy." Entropy is a way to measure how disordered or chaotic a system is. The idea is that the total entropy (or disorder) of a system, including both the black hole and the quantum matter around it, should never decrease. This idea helps ensure that the laws of physics stay consistent, even in the strange world of quantum mechanics. Energy conditions in quantum field theory The situation in quantum field theory is rather different: the expectation value of the energy density can be negative at any given point. In fact, things are even worse: by tuning the state of the quantum matter field, the expectation value of the local energy density can be made arbitrarily negative. 
Inequalities The general form of the worldline quantum inequality is the following; there are many published variations of quantum inequalities, but they all derive from this basic form. For free, massless, minimally coupled scalar fields, for all the following inequality holds along any inertial observer worldline with velocity and proper time : This implies the averaged weak energy condition as , but also places stricter bounds on the length of episodes of negative energy. Similar bounds can be constructed for massive scalar or electromagnetic fields. Related theorems imply that pulses of negative energy need to be compensated by a larger positive pulse (with magnitude growing with increasing pulse separation). Note that the inequality above applies only to inertial observers: for accelerated observers, weaker bounds, or none at all, apply. Applications Distributions of negative energy density comprise what is often referred to as exotic matter, and allow for several intriguing possibilities: for example, the Alcubierre drive potentially allows for faster-than-light space travel. Quantum inequalities constrain the magnitude and space-time extent of negative energy densities. In the case of the Alcubierre warp drive mentioned above, the quantum inequalities predict that the amount of exotic matter required to create and sustain the warp drive "bubble" far exceeds the total mass-energy of the universe. History The earliest investigations into quantum inequalities were carried out by Larry Ford and Tom Roman; an early collaborator was Michael Pfenning, one of Ford's students at Tufts University. Pfenning's work on quantum inequalities showed that in 2-D spacetimes (Minkowski and Rindler), the energy of the electromagnetic field behaves similarly to that of scalar fields because the spacetime is flat; the difference is that the electromagnetic field has two polarization states. However, in a 4-D curved spacetime (such as the Einstein universe), the fields behave differently, resulting in distinct quantum inequalities for each. This produces two separate equations for the electromagnetic and scalar fields. Important work was also carried out by Eanna Flanagan. Flanagan's work expands on Vollick's findings, which help explain how energy behaves in certain types of spacetimes. This study specifically examines the energy of a free, massless particle in a two-dimensional spacetime, so it does not apply directly to the three-dimensional space we experience in our world. More recently, Chris Fewster (of the University of York, in the UK) has applied rigorous mathematics to produce a variety of quite general quantum inequalities. Specific examples for the free scalar field have been computed. Additionally, QEIs have also been developed for a specific type of quantum field theory: unitary, positive-energy conformal field theories in two dimensions of space and time. In this setting it is possible to calculate the probability of obtaining different results when measuring certain "smears" (averages) of the stress-energy tensor, which represents the distribution of energy and momentum in space and time, when the system is in its lowest energy state (the vacuum state). Reiner Verch's work explores the role of quantum inequalities (QIs) in understanding the behavior of energy and particles in both quantum field theory and quantum mechanics. One key concept is the "backflow phenomenon," where particles appear to flow backward in certain situations, although this is governed by specific limits. 
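The display equation referred to in the Inequalities section above did not survive in this copy. For reference, a commonly quoted form of the Ford–Roman bound for the free, massless, minimally coupled scalar field in four-dimensional Minkowski spacetime, with a Lorentzian sampling function of width τ₀ and units ħ = c = 1, is (supplied here as a standard statement from the literature, not recovered from the original markup):

```latex
\hat{\rho} \;\equiv\; \frac{\tau_0}{\pi}\int_{-\infty}^{\infty}
\frac{\bigl\langle T_{\mu\nu}\,u^{\mu}u^{\nu}\bigr\rangle \, d\tau}{\tau^{2}+\tau_0^{2}}
\;\;\ge\;\; -\,\frac{3}{32\pi^{2}\tau_0^{4}},
```

where u^μ is the observer's four-velocity and τ the proper time along the worldline; the bound weakens as the sampling time τ₀ grows, recovering the averaged weak energy condition in the limit τ₀ → ∞.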
Verch also examines Weyl quantization, which relates to the uncertainty principle, suggesting that it is impossible to fully determine both the position and momentum of a particle simultaneously. His research further highlights that, despite appearances, quantum systems exhibit underlying stability, reinforcing fundamental principles of quantum mechanics, including the uncertainty principle and dynamical stability. Stefan Hollands' work focuses on Quantum Energy Inequalities (QEIs), which are rules in physics that limit how much "negative energy" can appear in certain areas of space and time. He studies these limits for a specific type of theory called conformal field theories (CFTs), which are mathematical models used to describe particles and forces in a two-dimensional flat universe (Minkowski space). The QEIs depend on two key things: A weight function, which is like a mathematical tool to focus on specific areas. The central charges of the theory, numbers that describe how complex the theory is. Importantly, these limits do not depend on the specific state of the system, meaning they apply universally. Hollands shows how these rules work for different situations: when measuring energy along paths slower than light (timelike), at the speed of light (null), and across regions of space (spacelike), as well as over entire areas of spacetime. The takeaway is that these rules prevent too much negative energy from appearing in one spot, ensuring the theory stays consistent with fundamental principles like causality—the idea that causes happen before effects. This helps scientists understand how energy behaves in complex quantum systems. References External links Quantum field theory on curved spacetime at the Erwin Schrödinger Institute Quantum Energy Inequalities (University of York, UK) Quantum field theory
Quantum inequalities
[ "Physics" ]
1,734
[ "Quantum field theory", "Quantum mechanics" ]
3,616,613
https://en.wikipedia.org/wiki/Work%20%28thermodynamics%29
Thermodynamic work is one of the principal kinds of process by which a thermodynamic system can interact with and transfer energy to its surroundings. This results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work, to lift a weight, for example, or cause changes in electromagnetic, or gravitational variables. Also, the surroundings can perform thermodynamic work on a thermodynamic system, which is measured by an opposite sign convention. For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume or magnetic flux density and magnetization. In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power, measured in joules per second, and denoted with the unit watt (W). History 1824 Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power for work. Specifically, according to Carnot: We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised. 1845 In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water. In this experiment, the motion of the paddle wheel, through agitation and friction, heated the body of water, so as to increase its temperature. Both the temperature change of the water and the height of the fall of the weight were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat. Joule estimated a mechanical equivalent of heat to be 819 ft•lbf/Btu (4.41 J/cal). The modern day definitions of heat, work, temperature, and energy all have connection to this experiment. In this arrangement of apparatus, it never happens that the process runs in reverse, with the water driving the paddles so as to raise the weight, not even slightly. Mechanical work was done by the apparatus of falling weight, pulley, and paddles, which lay in the surroundings of the water. Their motion scarcely affected the volume of the water. A quantity of mechanical work, measured as force × distance in the surroundings, that does not change the volume of the water, is said to be isochoric. Such work reaches the system only as friction, through microscopic modes, and is irreversible. It does not count as thermodynamic work. The energy supplied by the fall of the weight passed into the water as heat. Overview Conservation of energy A fundamental guiding principle of thermodynamics is the conservation of energy. The total energy of a system is the sum of its internal energy, of its potential energy as a whole system in an external force field, such as gravity, and of its kinetic energy as a whole system in motion. 
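As a quick check of Joule's figure quoted in the History section above, the following sketch converts his 1845 estimate of 819 ft·lbf/Btu into J/cal and compares it with the modern figure; the conversion factors are standard handbook values and are not taken from this article.

```python
# Rough check of Joule's 1845 estimate against the modern value.
# Assumed standard conversion factors: 1 ft·lbf = 1.35582 J, 1 Btu ≈ 251.996 cal.
FT_LBF_IN_J = 1.35582
CAL_PER_BTU = 251.996

def ft_lbf_per_btu_to_j_per_cal(x: float) -> float:
    """Convert a mechanical equivalent of heat from ft·lbf/Btu to J/cal."""
    return x * FT_LBF_IN_J / CAL_PER_BTU

joule_1845 = ft_lbf_per_btu_to_j_per_cal(819.0)    # Joule's early estimate
modern = ft_lbf_per_btu_to_j_per_cal(778.17)       # modern accepted figure

print(f"Joule (1845): {joule_1845:.2f} J/cal")     # ~4.41 J/cal
print(f"Modern value: {modern:.2f} J/cal")         # ~4.19 J/cal
```

The arithmetic reproduces the 4.41 J/cal quoted above, within rounding, and shows how close Joule's early measurement already was to the accepted value of about 4.19 J/cal.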
Thermodynamics has special concern with transfers of energy, from a body of matter, such as, for example a cylinder of steam, to the surroundings of the body, by mechanisms through which the body exerts macroscopic forces on its surroundings so as to lift a weight there; such mechanisms are the ones that are said to mediate thermodynamic work. Besides transfer of energy as work, thermodynamics admits transfer of energy as heat. For a process in a closed (no transfer of matter) thermodynamic system, the first law of thermodynamics relates changes in the internal energy (or other cardinal energy function, depending on the conditions of the transfer) of the system to those two modes of energy transfer, as work, and as heat. Adiabatic work is done without matter transfer and without heat transfer. In principle, in thermodynamics, for a process in a closed system, the quantity of heat transferred is defined by the amount of adiabatic work that would be needed to effect the change in the system that is occasioned by the heat transfer. In experimental practice, heat transfer is often estimated calorimetrically, through change of temperature of a known quantity of calorimetric material substance. Energy can also be transferred to or from a system through transfer of matter. The possibility of such transfer defines the system as an open system, as opposed to a closed system. By definition, such transfer is neither as work nor as heat. Changes in the potential energy of a body as a whole with respect to forces in its surroundings, and in the kinetic energy of the body moving as a whole with respect to its surroundings, are by definition excluded from the body's cardinal energy (examples are internal energy and enthalpy). Nearly reversible transfer of energy by work in the surroundings In the surroundings of a thermodynamic system, external to it, all the various mechanical and non-mechanical macroscopic forms of work can be converted into each other with no limitation in principle due to the laws of thermodynamics, so that the energy conversion efficiency can approach 100% in some cases; such conversion is required to be frictionless, and consequently adiabatic. In particular, in principle, all macroscopic forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule (see History section above). Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. For example, with the apparatus of Joule's experiment in which, through pulleys, a weight descending in the surroundings drives the stirring of a thermodynamic system, the descent of the weight can be diverted by a re-arrangement of pulleys, so that it lifts another weight in the surroundings, instead of stirring the thermodynamic system. Such conversion may be idealized as nearly frictionless, though it occurs relatively quickly. It usually comes about through devices that are not simple thermodynamic systems (a simple thermodynamic system is a homogeneous body of material substances). For example, the descent of the weight in Joule's stirring experiment reduces the weight's total energy. It is described as loss of gravitational potential energy by the weight, due to change of its macroscopic position in the gravity field, in contrast to, for example, loss of the weight's internal energy due to changes in its entropy, volume, and chemical composition. 
Though it occurs relatively rapidly, because the energy remains nearly fully available as work in one way or another, such diversion of work in the surroundings may be idealized as nearly reversible, or nearly perfectly efficient. In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency, as a consequence of the second law of thermodynamics. Such energy conversion, through work done relatively rapidly, in a practical heat engine, by a thermodynamic system on its surroundings, cannot be idealized, not even nearly, as reversible. Thermodynamic work done by a thermodynamic system on its surroundings is defined so as to comply with this principle. Historically, thermodynamics was about how a thermodynamic system could do work on its surroundings. Work done by and on a simple thermodynamic system Work done on, and work done by, a thermodynamic system need to be distinguished, through consideration of their precise mechanisms. Work done on a thermodynamic system, by devices or systems in the surroundings, is performed by actions such as compression, and includes shaft work, stirring, and rubbing. Such work done by compression is thermodynamic work as here defined. But shaft work, stirring, and rubbing are not thermodynamic work as here defined, in that they do not change the volume of the system against its resisting pressure. Work without change of volume is known as isochoric work, for example when an agency, in the surroundings of the system, drives a frictional action on the surface or in the interior of the system. In a process of transfer of energy from or to a thermodynamic system, the change of internal energy of the system is defined in theory by the amount of adiabatic work that would have been necessary to reach the final from the initial state, such adiabatic work being measurable only through the externally measurable mechanical or deformation variables of the system, that provide full information about the forces exerted by the surroundings on the system during the process. In the case of some of Joule's measurements, the process was so arranged that some heating that occurred outside the system (in the substance of the paddles) by the frictional process also led to heat transfer from the paddles into the system during the process, so that the quantity of work done by the surrounds on the system could be calculated as shaft work, an external mechanical variable. The amount of energy transferred as work is measured through quantities defined externally to the system of interest, and thus belonging to its surroundings. In an important sign convention, preferred in chemistry, work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention, preferred in physics, is to consider work done by the system on its surroundings as positive. Processes not described by macroscopic work Transfer of thermal energy through direct contact between a closed system and its surroundings, is by the microscopic thermal motions of particles and their associated inter-molecular potential energies. The microscopic description of such processes are the province of statistical mechanics, not of macroscopic thermodynamics. Another kind of energy transfer is by radiation, performing work on the system. Radiative transfer of energy is irreversible in the sense that it occurs only from a hotter to a colder system. 
There are several forms of dissipative transduction of energy that can occur internally within a system at a microscopic level, such as friction including bulk and shear viscosity chemical reaction, unconstrained expansion as in Joule expansion and in diffusion, and phase change. Open systems For an open system, the first law of thermodynamics admits three forms of energy transfer, as work, as heat, and as energy associated with matter that is transferred. The latter cannot be split uniquely into heat and work components. One-way convection of internal energy is a form a transport of energy but is not, as sometimes mistakenly supposed (a relic of the caloric theory of heat), transfer of energy as heat, because one-way convection is transfer of matter; nor is it transfer of energy as work. Nevertheless, if the wall between the system and its surroundings is thick and contains fluid, in the presence of a gravitational field, convective circulation within the wall can be considered as indirectly mediating transfer of energy as heat between the system and its surroundings, though the source and destination of the transferred energy are not in direct contact. Fictively imagined reversible thermodynamic "processes" For purposes of theoretical calculations about a thermodynamic system, one can imagine fictive idealized thermodynamic "processes" that occur so slowly that they do not incur friction within or on the surface of system; they can then be regarded as virtually reversible. These fictive processes proceed along paths on geometrical surfaces that are described exactly by a characteristic equation of the thermodynamic system. Those geometrical surfaces are the loci of possible states of thermodynamic equilibrium for the system. Really possible thermodynamic processes, occurring at practical rates, even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, always incur friction within the system, and so are always irreversible. The paths of such really possible processes always depart from those geometrical characteristic surfaces. Even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, such departures always entail entropy production. Joule heating and rubbing The definition of thermodynamic work is in terms of the changes of the system's extensive deformation (and chemical constitutive and certain other) state variables, such as volume, molar chemical constitution, or electric polarisation. Examples of state variables that are not extensive deformation or other such variables are temperature and entropy , as for example in the expression . Changes of such variables are not actually physically measureable by use of a single simple adiabatic thermodynamic process; they are processes that occur neither by thermodynamic work nor by transfer of matter, and therefore are said occur by heat transfer. The quantity of thermodynamic work is defined as work done by the system on its surroundings. According to the second law of thermodynamics, such work is irreversible. To get an actual and precise physical measurement of a quantity of thermodynamic work, it is necessary to take account of the irreversibility by restoring the system to its initial condition by running a cycle, for example a Carnot cycle, that includes the target work as a step. The work done by the system on its surroundings is calculated from the quantities that constitute the whole cycle. 
A different cycle would be needed to actually measure the work done by the surroundings on the system. This is a reminder that rubbing the surface of a system appears to the rubbing agent in the surroundings as mechanical, though not thermodynamic, work done on the system, not as heat, but appears to the system as heat transferred to the system, not as thermodynamic work. The production of heat by rubbing is irreversible; historically, it was a piece of evidence for the rejection of the caloric theory of heat as a conserved substance. The irreversible process known as Joule heating also occurs through a change of a non-deformation extensive state variable. Accordingly, in the opinion of Lavenda, work is not as primitive concept as is heat, which can be measured by calorimetry. This opinion does not negate the now customary thermodynamic definition of heat in terms of adiabatic work. Known as a thermodynamic operation, the initiating factor of a thermodynamic process is, in many cases, a change in the permeability of a wall between the system and the surroundings. Rubbing is not a change in wall permeability. Kelvin's statement of the second law of thermodynamics uses the notion of an "inanimate material agency"; this notion is sometimes regarded as puzzling. The triggering of a process of rubbing can occur only in the surroundings, not in a thermodynamic system in its own state of internal thermodynamic equilibrium. Such triggering may be described as a thermodynamic operation. Formal definition In thermodynamics, the quantity of work done by a closed system on its surroundings is defined by factors strictly confined to the interface of the surroundings with the system and to the surroundings of the system, for example, an extended gravitational field in which the system sits, that is to say, to things external to the system. A main concern of thermodynamics is the properties of materials. Thermodynamic work is defined for the purposes of thermodynamic calculations about bodies of material, known as thermodynamic systems. Consequently, thermodynamic work is defined in terms of quantities that describe the states of materials, which appear as the usual thermodynamic state variables, such as volume, pressure, temperature, chemical composition, and electric polarization. For example, to measure the pressure inside a system from outside it, the observer needs the system to have a wall that can move by a measurable amount in response to pressure differences between the interior of the system and the surroundings. In this sense, part of the definition of a thermodynamic system is the nature of the walls that confine it. Several kinds of thermodynamic work are especially important. One simple example is pressure–volume work. The pressure of concern is that exerted by the surroundings on the surface of the system, and the volume of interest is the negative of the increment of volume gained by the system from the surroundings. It is usually arranged that the pressure exerted by the surroundings on the surface of the system is well defined and equal to the pressure exerted by the system on the surroundings. This arrangement for transfer of energy as work can be varied in a particular way that depends on the strictly mechanical nature of pressure–volume work. The variation consists in letting the coupling between the system and surroundings be through a rigid rod that links pistons of different areas for the system and surroundings. 
Then for a given amount of work transferred, the exchange of volumes involves different pressures, inversely with the piston areas, for mechanical equilibrium. This cannot be done for the transfer of energy as heat because of its non-mechanical nature. Another important kind of work is isochoric work, i.e., work that involves no eventual overall change of volume of the system between the initial and the final states of the process. Examples are friction on the surface of the system as in Rumford's experiment; shaft work such as in Joule's experiments; stirring of the system by a magnetic paddle inside it, driven by a moving magnetic field from the surroundings; and vibrational action on the system that leaves its eventual volume unchanged, but involves friction within the system. Isochoric mechanical work for a body in its own state of internal thermodynamic equilibrium is done only by the surroundings on the body, not by the body on the surroundings, so that the sign of isochoric mechanical work with the physics sign convention is always negative. When work, for example pressure–volume work, is done on its surroundings by a closed system that cannot pass heat in or out because it is confined by an adiabatic wall, the work is said to be adiabatic for the system as well as for the surroundings. When mechanical work is done on such an adiabatically enclosed system by the surroundings, it can happen that friction in the surroundings is negligible, for example in the Joule experiment with the falling weight driving paddles that stir the system. Such work is adiabatic for the surroundings, even though it is associated with friction within the system. Such work may or may not be isochoric for the system, depending on the system and its confining walls. If it happens to be isochoric for the system (and does not eventually change other system state variables such as magnetization), it appears as a heat transfer to the system, and does not appear to be adiabatic for the system.
Sign convention
In the early history of thermodynamics, a positive amount of work done by the system on the surroundings leads to energy being lost from the system. This historical sign convention has been used in many physics textbooks and is used in the present article. According to the first law of thermodynamics for a closed system, any net change in the internal energy U must be fully accounted for, in terms of heat Q entering the system and work W done by the system:
$$\Delta U = Q - W.$$
An alternate sign convention is to consider the work performed on the system by its surroundings as positive. This leads to a change in sign of the work, so that $\Delta U = Q + W$. This convention has historically been used in chemistry, and has been adopted by several modern physics textbooks. This equation reflects the fact that the heat transferred and the work done are not properties of the state of the system. Given only the initial state and the final state of the system, one can only say what the total change in internal energy was, not how much of the energy went out as heat, and how much as work. This can be summarized by saying that heat and work are not state functions of the system. This is in contrast to classical mechanics, where the net work exerted on a particle is a state function.
Pressure–volume work
Pressure–volume work (or PV or P-V work) occurs when the volume of a system changes. PV work is often measured in units of litre-atmospheres, where 1 L·atm = 101.325 J.
However, the litre-atmosphere is not a recognized unit in the SI system of units, which measures P in pascals (Pa), V in m³, and PV in joules (J), where 1 J = 1 Pa·m³. PV work is an important topic in chemical thermodynamics. For a process in a closed system, occurring slowly enough for accurate definition of the pressure on the inside of the system's wall that moves and transmits force to the surroundings, described as quasi-static, work is represented by the following equation between differentials:
$$\delta W = P\,dV,$$
where $\delta W$ (an inexact differential) denotes an infinitesimal increment of work done by the system, transferring energy to the surroundings; $P$ denotes the pressure inside the system, which it exerts on the moving wall that transmits force to the surroundings (in the alternative sign convention the right-hand side has a negative sign); and $dV$ (an exact differential) denotes an infinitesimal increment of the volume of the system. Moreover,
$$W = \int_{V_1}^{V_2} P\,dV,$$
where $W$ denotes the work done by the system during the whole of the reversible process. The first law of thermodynamics can then be expressed as
$$dU = \delta Q - P\,dV.$$
(In the alternative sign convention, where W = work done on the system, $dU = \delta Q + \delta W$. However, $dU = \delta Q - P\,dV$ is unchanged.)
Path dependence
PV work is path-dependent and is, therefore, a thermodynamic process function. In general, the term $P\,dV$ is not an exact differential. The statement that a process is quasi-static gives important information about the process but does not determine the P–V path uniquely, because the path can include several slow goings backwards and forwards in volume, slowly enough to exclude friction within the system occasioned by departure from the quasi-static requirement. An adiabatic wall is one that does not permit passage of energy by conduction or radiation. The first law of thermodynamics states that $dU = \delta Q - \delta W$. For a quasi-static adiabatic process, $\delta Q = 0$, so that $dU = -\delta W$. Also $\delta W = P\,dV$, so that $dU = -P\,dV$. It follows that
$$W = \int_{V_1}^{V_2} P\,dV = -\Delta U,$$
so that the work is fixed by the end states. Internal energy is a state function, so its change depends only on the initial and final states of a process. For a quasi-static adiabatic process, the change in internal energy is equal to minus the integral amount of work done by the system, so the work also depends only on the initial and final states of the process and is one and the same for every intermediate path. As a result, the work done by the system depends only on the initial and final states. If the process path is other than quasi-static and adiabatic, there are indefinitely many different paths, with significantly different work amounts, between the initial and final states. (Again the internal energy change depends only on the initial and final states as it is a state function.) In the current mathematical notation, the differential $\delta W$ is an inexact differential. In another notation, $\delta W$ is written đW (with a horizontal line through the d). This notation indicates that $\delta W$ is not an exact one-form. The line-through is merely a flag to warn us that there is actually no function (0-form) $W$ which is the potential of $\delta W$. If there were, indeed, such a function $W$, we should be able to just use Stokes' theorem to evaluate this putative potential of $\delta W$ at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work at a point in the PV diagram; work presupposes a path.
Other mechanical types of work
There are several ways of doing mechanical work, each in some way related to a force acting through a distance.
In basic mechanics, the work done by a constant force F on a body displaced a distance s in the direction of the force is given by
$$W = Fs.$$
If the force is not constant, the work done is obtained by integrating the differential amount of work,
$$W = \int F\,ds.$$
Rotational work
Energy transmission with a rotating shaft is very common in engineering practice. Often the torque T applied to the shaft is constant, which means that the force F applied is constant. For a specified constant torque, the work done during n revolutions is determined as follows: a force F acting through a moment arm r generates a torque T,
$$T = Fr \quad\Rightarrow\quad F = \frac{T}{r}.$$
This force acts through a distance s, which is related to the radius r by
$$s = (2\pi r)\,n.$$
The shaft work is then determined from
$$W_{\text{sh}} = Fs = \frac{T}{r}\,(2\pi r n) = 2\pi n T.$$
The power transmitted through the shaft is the shaft work done per unit time, which is expressed as
$$\dot W_{\text{sh}} = 2\pi \dot n\, T,$$
where $\dot n$ is the rotational speed (revolutions per unit time).
Spring work
When a force is applied on a spring, and the length of the spring changes by a differential amount dx, the work done is
$$\delta W = F\,dx.$$
For linear elastic springs, the displacement x is proportional to the force applied,
$$F = Kx,$$
where K is the spring constant and has the unit of N/m. The displacement x is measured from the undisturbed position of the spring (that is, $F = 0$ when $x = 0$). Substituting the two equations gives
$$W = \tfrac{1}{2}K\left(x_2^{2} - x_1^{2}\right),$$
where x1 and x2 are the initial and the final displacements of the spring respectively, measured from the undisturbed position of the spring.
Work done on elastic solid bars
Solids are often modeled as linear springs because under the action of a force they contract or elongate, and when the force is lifted, they return to their original lengths, like a spring. This is true as long as the force is in the elastic range, that is, not large enough to cause permanent or plastic deformation. Therefore, the equations given for a linear spring can also be used for elastic solid bars. Alternately, we can determine the work associated with the expansion or contraction of an elastic solid bar by replacing the pressure P by its counterpart in solids, the normal stress $\sigma_n$, in the work expression:
$$W = \int_{1}^{2} \sigma_n\, A\,dx,$$
where A is the cross-sectional area of the bar.
Work associated with the stretching of liquid film
Consider a liquid film such as a soap film suspended on a wire frame. Some force is required to stretch this film by the movable portion of the wire frame. This force is used to overcome the microscopic forces between molecules at the liquid–air interface. These microscopic forces are perpendicular to any line in the surface, and the force generated by these forces per unit length is called the surface tension σ, whose unit is N/m. Therefore, the work associated with the stretching of a film is called surface tension work, and is determined from
$$W = \int_{1}^{2} \sigma\,dA,$$
where $dA = 2b\,dx$ is the change in the surface area of the film. The factor 2 is due to the fact that the film has two surfaces in contact with air. The force acting on the movable wire of length b as a result of surface tension effects is $F = 2b\sigma$, where σ is the surface tension force per unit length.
Free energy and exergy
The amount of useful work which may be extracted from a thermodynamic system is determined by the second law of thermodynamics. Under many practical situations this can be represented by the thermodynamic availability, or exergy, function. Two important cases are: in thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy function; and in systems where the temperature and pressure are held constant, the measure of useful work attainable is the Gibbs free energy.
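Before turning to non-mechanical work, here is a small numerical sketch of the mechanical-work formulas above (shaft work and spring work), together with a check that quasi-static PV work is path-dependent. All numbers are invented for illustration, and the ideal-gas isotherm is assumed only so that the two PV paths share the same end states.

```python
import math

def shaft_work(torque: float, revolutions: float) -> float:
    """W_sh = 2*pi*n*T for a constant torque T over n revolutions."""
    return 2.0 * math.pi * revolutions * torque

def spring_work(k: float, x1: float, x2: float) -> float:
    """W = (K/2)*(x2^2 - x1^2) for a linear spring of constant K."""
    return 0.5 * k * (x2**2 - x1**2)

# Path dependence of quasi-static PV work: two paths from (P1, V1) to (P2, V2)
# lying on the same ideal-gas isotherm, so P1*V1 == P2*V2.
def w_isothermal(p1: float, v1: float, v2: float) -> float:
    """W = integral of P dV with P = P1*V1/V, i.e. P1*V1*ln(V2/V1)."""
    return p1 * v1 * math.log(v2 / v1)

def w_two_step(p1: float, v1: float, v2: float) -> float:
    """Isobaric expansion at P1 from V1 to V2, then an isochoric pressure drop
    to P2 = P1*V1/V2; only the first leg does work."""
    return p1 * (v2 - v1)

print(shaft_work(torque=200.0, revolutions=10))   # ~12566 J
print(spring_work(k=5000.0, x1=0.0, x2=0.1))      # 25 J
p1, v1, v2 = 100_000.0, 0.010, 0.020              # Pa, m^3
print(w_isothermal(p1, v1, v2))                   # ~693 J along the isotherm
print(w_two_step(p1, v1, v2))                     # 1000 J along the two-step path
```

The last two lines give different amounts of work between the same initial and final states, which is exactly the path dependence discussed in the previous section.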
Non-mechanical forms of work Non-mechanical work in thermodynamics is work caused by external force fields that a system is exposed to. The action of such forces can be initiated by events in the surroundings of the system, or by thermodynamic operations on the shielding walls of the system. The non-mechanical work of force fields can have either positive or negative sign, work being done by the system on the surroundings, or vice versa. Work done by force fields can be done indefinitely slowly, so as to approach the fictive reversible quasi-static ideal, in which entropy is not created in the system by the process. In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surrounds is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surroundings; the transfer of energy can then be considered as convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may convective circulation within the wall but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which in they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system. Non-mechanical work contrasts with pressure–volume work. Pressure–volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is due to the pressure exerted on the interfacing wall by the material inside the system; that pressure is an internal state variable of the system, but is properly measured by external devices at the wall. The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. 
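The work modes contrasted in this section are often gathered into a single generalized expression; this is a standard textbook summary rather than anything taken from this article's sources:
$$\delta W = \sum_i X_i\,dx_i,$$
where each generalized force $X_i$ (pressure, electric field, magnetic field strength, surface tension, and so on) multiplies the differential of its conjugate extensive variable $x_i$ (volume, total polarization, total magnetization, surface area, and so on), with the sign of each term fixed by the chosen convention. The pressure–volume term $P\,dV$ is simply the member of this family discussed in the most detail above.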
Pressure–volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure–volume work can have either positive or negative sign. Pressure–volume work, performed slowly enough, can be made to approach the fictive reversible quasi-static ideal. Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure–volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy. Examples of non-mechanical work modes include Electric field work – where the force is defined by the surroundings' voltage (the electrical potential) and the generalized displacement is change of spatial distribution of electrical charge Electrical polarization work – where the force is defined by the surroundings' electric field strength and the generalized displacement is change of the polarization of the medium (the sum of the electric dipole moments of the molecules) Magnetic work – where the force is defined by the surroundings' magnetic field strength and the generalized displacement is change of total magnetic dipole moment Gravitational work Gravitational work is defined by the force on a body measured in a gravitational field. It may cause a generalized displacement in the form of change of the spatial distribution of the matter within the system. The system gains internal energy (or other relevant cardinal quantity of energy, such as enthalpy) through internal friction. 
As seen by the surroundings, such frictional work appears as mechanical work done on the system, but as seen by the system, it appears as transfer of energy as heat. When the system is in its own state of internal thermodynamic equilibrium, its temperature is uniform throughout. If the volume and other extensive state variables, apart from entropy, are held constant over the process, then the transferred heat must appear as increased temperature and entropy; in a uniform gravitational field, the pressure of the system will be greater at the bottom than at the top. By definition, the relevant cardinal energy function is distinct from the gravitational potential energy of the system as a whole; the latter may also change as a result of gravitational work done by the surroundings on the system. The gravitational potential energy of the system is a component of its total energy, alongside its other components, namely its cardinal thermodynamic (e.g. internal) energy and its kinetic energy as a whole system in motion. See also Electrochemical hydrogen compressor Chemical reactions Microstate (statistical mechanics) - includes Microscopic definition of work References Thermodynamics
Work (thermodynamics)
[ "Physics", "Chemistry", "Mathematics" ]
7,326
[ "Thermodynamics", "Dynamical systems" ]
3,616,959
https://en.wikipedia.org/wiki/Gonioreflectometer
A gonioreflectometer is a device for measuring a bidirectional reflectance distribution function (BRDF). The device consists of a light source illuminating the material to be measured and a sensor that captures light reflected from that material. The light source should be able to illuminate and the sensor should be able to capture data from a hemisphere around the target. The hemispherical rotation dimensions of the sensor and light source are the four dimensions of the BRDF. The 'gonio' part of the word refers to the device's ability to measure at different angles. Several similar devices have been built and used to capture data for similar functions. Most of these devices use a camera instead of the light intensity-measuring sensor to capture a two-dimensional sample of the target. Examples include: a spatial gonioreflectometer for capturing the SBRDF (McAllister, 2002). a camera gantry for capturing the light field (Levoy and Hanrahan, 1996). an unnamed device for capturing the bidirectional texture function (Dana et al., 1999). References Dana, Kristin et al. 1999. Reflectance and Texture of Real-World Surfaces. in ACM Transactions on Graphics. Volume 18, Issue 1 (January, 1999). New York, NY, USA: ACM Press. Pages 1-34. Foo, Sing Choong. 1997. A Gonioreflectometer for measuring the bidirectional reflectance of materials for use in illumination computations. Masters thesis. Cornell University. Ithaca, New York, USA. Levoy, Marc & Hanrahan, Pat. 1996. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. McAllister, David. 2002. A Generalized Surface Appearance Representation for Computer Graphics. PhD dissertation. University of North Carolina at Chapel Hill, Department of Computer Science. Chapel Hill, USA. 118p. Computer graphics Photometry Measuring instruments
Gonioreflectometer
[ "Technology", "Engineering" ]
406
[ "Measuring instruments" ]
3,618,245
https://en.wikipedia.org/wiki/Safety%20glass
Safety glass is glass with additional safety features that make it less likely to break, or less likely to pose a threat when broken. Common designs include toughened glass (also known as tempered glass), laminated glass, and wire mesh glass (also known as wired glass). Toughened glass was invented in 1874 by Francois Barthelemy Alfred Royer de la Bastie. Wire mesh glass was invented in 1892 by Frank Shuman. Laminated glass was invented in 1903 by the French chemist Édouard Bénédictus (1878–1930). These three approaches can easily be combined, allowing for the creation of glass that is at the same time toughened, laminated, and contains a wire mesh. However, combination of a wire mesh with other techniques is unusual, as it typically betrays their individual qualities. In many developed countries safety glass is part of the building regulations making properties safer. Toughened glass Toughened glass is processed by controlled thermal or chemical treatments to increase its strength compared with normal glass. Tempering, by design, creates balanced internal stresses which causes the glass sheet, when broken, to crumble into small granular chunks of similar size and shape instead of splintering into random, jagged shards. The granular chunks are less likely to cause injury. As a result of its safety and strength, tempered glass is used in a variety of demanding applications, including passenger vehicle windows, shower doors, architectural glass doors and tables, refrigerator trays, as a component of bulletproof glass, for diving masks, and various types of plates and cookware. In the United States, since 1977 Federal law has required safety glass located within doors and tub and shower enclosures. Laminated glass Laminated glass is composed of layers of glass and plastic held together by an interlayer. When laminated glass is broken, it is held in place by an interlayer, typically of polyvinyl butyral (PVB), between its two or more layers of glass, which crumble into small pieces. The interlayer keeps the layers of glass bonded even when broken, and its toughening prevents the glass from breaking up into large sharp pieces. This produces a characteristic "spider web" cracking pattern (radial and concentric cracks) when the impact is not enough to completely pierce the glass. Laminated glass is normally used when there is a possibility of human impact or where the glass could fall if shattered. Skylight glazing and automobile windshields typically use laminated glass. In geographical areas requiring hurricane-resistant construction, laminated glass is often used in exterior storefronts, curtain walls and windows. The PVB interlayer also gives the glass a much higher sound insulation rating, due to the damping effect, and also blocks most of the incoming UV radiation (88% in window glass and 97.4% in windscreen glass). Wire mesh glass Wire mesh glass (also known as Georgian Wired Glass) has a grid or mesh of thin metal wire embedded within the glass. Wired glass is used in the US for its fire-resistant abilities, and is well-rated to withstand both heat and hose streams. This is why wired glass exclusively is used on service elevators to prevent fire ingress to the shaft, and also why it is commonly found in institutional settings which are often well-protected and partitioned against fire. The wire prevents the glass from falling out of the frame even if it cracks under thermal stress, and is far more heat-resistant than a laminating material. 
Wired glass, as it is typically described, does not perform the function most individuals associate with it. The presence of the wire mesh appears to be a strengthening component, as it is metallic, and conjures up the idea of rebar in reinforced concrete or other such examples. Despite this belief, wired glass is actually weaker than unwired glass due to the incursions of the wire into the structure of the glass. Wired glass often may cause heightened injury in comparison to unwired glass, as the wire amplifies the irregularity of any fractures. This has led to a decline in its use institutionally, particularly in schools. In recent years, new materials have become available that offer both fire-ratings and safety ratings so the continued use of wired glass is being debated worldwide. The US International Building Code effectively banned wired glass in 2006. Canada's building codes still permit the use of wired glass but the codes are being reviewed and traditional wired glass is expected to be greatly restricted in its use. Australia has no similar review taking place. See also Architectural glass References Glass
Safety glass
[ "Physics", "Chemistry" ]
922
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
8,370,210
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Englert
François, Baron Englert (; born 6 November 1932) is a Belgian theoretical physicist and 2013 Nobel Prize laureate. Englert is professor emeritus at the Université libre de Bruxelles (ULB), where he is a member of the Service de Physique Théorique. He is also a Sackler Professor by Special Appointment in the School of Physics and Astronomy at Tel Aviv University and a member of the Institute for Quantum Studies at Chapman University in California. He was awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics (with Gerry Guralnik, C. R. Hagen, Tom Kibble, Peter Higgs, and Robert Brout), the Wolf Prize in Physics in 2004 (with Brout and Higgs) and the High Energy and Particle Prize of the European Physical Society (with Brout and Higgs) in 1997 for the mechanism which unifies short and long range interactions by generating massive gauge vector bosons. Englert has made contributions in statistical physics, quantum field theory, cosmology, string theory and supergravity. He is the recipient of the 2013 Prince of Asturias Award in technical and scientific research, together with Peter Higgs and CERN. Englert was awarded the 2013 Nobel Prize in Physics, together with Peter Higgs for the discovery of the Brout–Englert–Higgs mechanism. Early life François Englert is a Holocaust survivor. He was born in a Belgian Jewish family. During the German occupation of Belgium in World War II, he had to conceal his Jewish identity and live in orphanages and children's homes in the towns of Dinant, Lustin, Stoumont and, finally, Annevoie-Rouillon. These towns were eventually liberated by the US Army. Academic career He graduated as an electromechanical engineer in 1955 from the Free University of Brussels (ULB) where he received his PhD in physical sciences in 1959. From 1959 until 1961, he worked at Cornell University, first as a research associate of Robert Brout and then as assistant professor. He then returned to the ULB, where he became a university professor and was joined there by Robert Brout who, in 1980, with Englert coheaded the theoretical physics group. In 1998 Englert became professor emeritus. In 1984 Englert was first appointed as a Sackler Professor by Special Appointment in the School of Physics and Astronomy at Tel-Aviv University. Englert joined Chapman University's Institute for Quantum Studies in 2011, where he serves as a distinguished visiting professor. Brout–Englert–Higgs–Guralnik–Hagen–Kibble mechanism Brout and Englert showed in 1964 that gauge vector fields, abelian and non-abelian, could acquire mass if empty space were endowed with a particular type of structure that one encounters in material systems. Focusing on the failure of the Goldstone theorem for gauge fields, Higgs reached essentially the same result. A third paper on the subject was written later in the same year by Gerald Guralnik, C. R. Hagen, and Tom Kibble. The three papers written on this boson discovery by Higgs, Englert and Brout, and Guralnik, Hagen, Kibble were each recognized as milestone papers for this discovery by Physical Review Letters 50th anniversary celebration. While each of these famous papers took similar approaches, the contributions and differences between the 1964 PRL symmetry breaking papers is noteworthy. To illustrate the structure, consider a ferromagnet which is composed of atoms each equipped with a tiny magnet. When these magnets are lined up, the inside of the ferromagnet bears a strong analogy to the way empty space can be structured. 
Gauge vector fields that are sensitive to this structure of empty space can only propagate over a finite distance. Thus, they mediate short range interactions and acquire mass. Those fields that are not sensitive to the structure propagate unhindered. They remain massless and are responsible for the long range interactions. In this way, the mechanism accommodates within a single unified theory both short and long-range interactions. Brout and Englert, Higgs, and Gerald Guralnik, C. R. Hagen, and Tom Kibble introduced as agent of the vacuum structure a scalar field (most often called the Higgs field) which many physicists view as the agent responsible for the masses of fundamental particles. Brout and Englert also showed that the mechanism may remain valid if the scalar field is replaced by a more structured agent such as a fermion condensate. Their approach led them to conjecture that the theory is renormalizable. The eventual proof of renormalizability, a major achievement of twentieth century physics, is due to Gerardus 't Hooft and Martinus Veltman who were awarded the 1999 Nobel Prize for this work. The Brout–Englert–Higgs–Guralnik–Hagen–Kibble mechanism is the building stone of the electroweak theory of elementary particles and laid the foundation of a unified view of the basic laws of nature. Major awards 1978 First Prize in the International Gravity Contest (with R. Brout and E. Gunzig), awarded by the Gravity Research Foundation for the essay "The Causal Universe". 1982 Francqui Prize, awarded by the Francqui Foundation once every four years in exact sciences "For his contribution to the theoretical understanding of spontaneous symmetry breaking in the physics of fundamental interactions, where, with Robert Brout, he was the first to show that spontaneous symmetry breaking in gauge theories gives mass to the gauge particles, for his extensive contributions in other domains, such as solid state physics, statistical mechanics, quantum field theory, general relativity and cosmology, for the originality and the fundamental importance of these achievements". 1997 High Energy and Particle Physics Prize (with R. Brout and P.W. Higgs), awarded by the European Physical Society "For formulating for the first time a self-consistent theory of charged massive vector bosons which became the foundation of the electroweak theory of elementary particles". 2004 Wolf Prize in Physics (with R. Brout and P.W. Higgs), awarded by the Wolf Foundation "For pioneering work that has led to the insight of mass generation, whenever a local gauge symmetry is realized asymmetrically in the world of sub-atomic particles". 2010 J. J. Sakurai Prize for Theoretical Particle Physics (with Guralnik, Hagen, Kibble, Higgs, and Brout) awarded by The American Physical Society "For elucidation of the properties of spontaneous symmetry breaking in four-dimensional relativistic gauge theory and of the mechanism for the consistent generation of vector boson masses". By Royal Decree of 8 July 2013 François Englert was ennobled a baron by King Albert II of Belgium. 2013 Nobel Prize in Physics, shared with Peter Higgs "for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider". 
2013 Prince of Asturias Award for Technical and Scientific Research (with Peter Higgs and CERN) "for the theoretical prediction and experimental detection of the Higgs boson". See also List of Jewish Nobel laureates References External links François Englert's personal webpage 1932 births Living people People from Etterbeek Holocaust survivors Belgian Jews Belgian barons Belgian physicists Belgian Nobel laureates Jewish physicists Particle physicists Theoretical physicists Cornell University faculty Free University of Brussels (1834–1969) alumni Wolf Prize in Physics laureates J. J. Sakurai Prize for Theoretical Particle Physics recipients Nobel laureates in Physics Recipients of Princess of Asturias Awards
François Englert
[ "Physics" ]
1,617
[ "Theoretical physics", "Theoretical physicists", "Particle physics", "Particle physicists" ]
8,372,004
https://en.wikipedia.org/wiki/Periodic%20trends
In chemistry, periodic trends are specific patterns present in the periodic table that illustrate different aspects of certain elements when grouped by period and/or group. They were discovered by the Russian chemist Dmitri Mendeleev in 1863. Major periodic trends include atomic radius, ionization energy, electron affinity, electronegativity, nucleophilicity, electrophilicity, valency, nuclear charge, and metallic character. Mendeleev built the foundation of the periodic table. Mendeleev organized the elements based on atomic weight, leaving empty spaces where he believed undiscovered elements would take their places. Mendeleev’s discovery of this trend allowed him to predict the existence and properties of three unknown elements, which were later discovered by other chemists and named gallium, scandium, and germanium. English physicist Henry Moseley discovered that organizing the elements by atomic number instead of atomic weight would naturally group elements with similar properties. Summary of trends Atomic radius The atomic radius is the distance from the atomic nucleus to the outermost electron orbital in an atom. In general, the atomic radius decreases as we move from left-to-right in a period, and it increases when we go down a group. This is because in periods, the valence electrons are in the same outermost shell. The atomic number increases within the same period while moving from left to right, which in turn increases the effective nuclear charge. The increase in attractive forces reduces the atomic radius of elements. When we move down the group, the atomic radius increases due to the addition of a new shell. Nuclear charge and effective nuclear charge Nuclear charge is defined as the number of protons in the nucleus of an element. Thus, from left-to-right of a period and top-to-bottom of a group, as the number of protons in the nucleus increases, the nuclear charge will also increase. However, electrons of multi-electron atoms do not experience the entire nuclear charge due to shielding effects from the other electrons. In this case, the nuclear charge of atoms that experience this shielding is referred to as effective nuclear charge. Shielding increases as the number of an atom’s inner shells increases. So from left-to-right of a period, the effective nuclear charge will still increase. But, from top-to-bottom of a group, as the number of shells increases, the effective nuclear charge will decrease. Ionization energy The ionization energy is the minimum amount of energy that an electron in a gaseous atom or ion has to absorb to come out of the influence of the attracting force of the nucleus. It is also referred to as ionization potential. The first ionization energy is the amount of energy that is required to remove the first electron from a neutral atom. The energy needed to remove the second electron from the neutral atom is called the second ionization energy and so on. As one moves from left-to-right across a period in the modern periodic table, the ionization energy increases as the nuclear charge increases and the atomic size decreases. The decrease in the atomic size results in a more potent force of attraction between the electrons and the nucleus. However, suppose one moves down in a group. In that case, the ionization energy decreases as atomic size increases due to adding a valence shell, thereby diminishing the nucleus's attraction to electrons. 
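The trend just described can be illustrated with approximate first ionization energies in kJ/mol; the figures below are rounded values from standard reference tables, not from this article, and serve only as a sketch.

```python
# Approximate first ionization energies (kJ/mol) across period 2.
first_ie_period_2 = {
    "Li": 520, "Be": 900, "B": 801, "C": 1086,
    "N": 1402, "O": 1314, "F": 1681, "Ne": 2081,
}

# Down group 1 the values fall as a new shell is added in each row.
first_ie_group_1 = {"Li": 520, "Na": 496, "K": 419, "Rb": 403, "Cs": 376}

print("Across period 2 (rising overall):",
      ", ".join(f"{el} {v}" for el, v in first_ie_period_2.items()))
print("Down group 1 (falling):",
      ", ".join(f"{el} {v}" for el, v in first_ie_group_1.items()))
# Note the small dips at B and O: subshell effects mean the rise across a
# period is a general trend rather than a strict rule.
```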
Electron affinity The energy released when an electron is added to a neutral gaseous atom to form an anion is known as electron affinity. Trend-wise, as one progresses from left to right across a period, the electron affinity will increase as the nuclear charge increases and the atomic size decreases resulting in a more potent force of attraction of the nucleus and the added electron. However, as one moves down in a group, electron affinity decreases because atomic size increases due to the addition of a valence shell, thereby weakening the nucleus's attraction to electrons. Although it may seem that fluorine should have the greatest electron affinity, its small size generates enough repulsion among the electrons, resulting in chlorine having the highest electron affinity in the halogen family. Electronegativity The tendency of an atom in a molecule to attract the shared pair of electrons towards itself is known as electronegativity. It is a dimensionless quantity because it is only a tendency. The most commonly used scale to measure electronegativity was designed by Linus Pauling. The scale has been named the Pauling scale in his honour. According to this scale, fluorine is the most electronegative element, while cesium is the least electronegative element. Trend-wise, as one moves from left to right across a period in the modern periodic table, the electronegativity increases as the nuclear charge increases and the atomic size decreases. However, if one moves down in a group, the electronegativity decreases as atomic size increases due to the addition of a valence shell, thereby decreasing the atom's attraction to electrons. However, in group XIII (boron family), the electronegativity first decreases from boron to aluminium and then increases down the group. It is due to the fact that the atomic size increases as we move down the group, but at the same time the effective nuclear charge increases due to poor shielding of the inner d and f electrons. As a result, the force of attraction of the nucleus for the electrons increases and hence the electronegativity increases from aluminium to thallium. Valency The valency of an element is the number of electrons that must be lost or gained by an atom to obtain a stable electron configuration. In simple terms, it is the measure of the combining capacity of an element to form chemical compounds. Electrons found in the outermost shell are generally known as valence electrons; the number of valence electrons determines the valency of an atom. Trend-wise, while moving from left to right across a period, the number of valence electrons of elements increases and varies between one and eight. But the valency of elements first increases from 1 to 4, and then it decreases to 0 as we reach the noble gases. However, as we move down in a group, the number of valence electrons generally does not change. Hence, in many cases the elements of a particular group have the same valency. However, this periodic trend is not always followed for heavier elements, especially for the f-block and the transition metals. These elements show variable valency as these elements have a d-orbital as the penultimate orbital and an s-orbital as the outermost orbital. The energies of these (n-1)d and ns orbitals (e.g., 4d and 5s) are relatively close. 
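A compact numerical illustration of the electronegativity trends described above, using Pauling-scale values taken from standard tables rather than from this article:

```python
# Pauling electronegativities across period 2 and down group 17.
period_2 = {"Li": 0.98, "Be": 1.57, "B": 2.04, "C": 2.55,
            "N": 3.04, "O": 3.44, "F": 3.98}
group_17 = {"F": 3.98, "Cl": 3.16, "Br": 2.96, "I": 2.66}

# The values rise monotonically left to right across the period...
assert list(period_2.values()) == sorted(period_2.values())
# ...and fall monotonically down the group.
assert list(group_17.values()) == sorted(group_17.values(), reverse=True)

print("Period 2:", period_2)
print("Group 17:", group_17)
```

Fluorine sits at the top of both orderings, consistent with its position as the most electronegative element on the Pauling scale.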
Metallic and non-metallic properties Metallic properties generally increase down the groups, as decreasing attraction between the nuclei and outermost electrons causes these electrons to be more loosely bound and thus able to conduct heat and electricity. Across each period, from left to right, the increasing attraction between the nuclei and the outermost electrons causes the metallic character to decrease. In contrast, the nonmetallic character decreases down the groups and increases across the periods. Nucleophilicity and Electrophilicity Electrophilicity refers to the tendency of an electron-deficient species, called an electrophile, to accept electrons. Similarly, nucleophilicity is defined as the affinity of an electron-rich species, known as a nucleophile, to donate electrons to another species. Trends in the periodic table are useful for predicting an element's nucleophilicity and electrophilicity. In general, nucleophilicity decreases as electronegativity increases, meaning that nucleophilicity decreases from left to right across the periodic table. On the other hand, electrophilicity generally increases as electronegativity increases, meaning that electrophilicity follows an increasing trend from left to right on the periodic table. However, the specific molecular or chemical environment of the electrophile also influences electrophilicity. Therefore, electrophilicity cannot be accurately predicted based solely on periodic trends. See also Periodic table History of the periodic table List of elements by atomic properties References Further reading Periodic Table Of Elements (IUPAC) Properties of chemical elements
Periodic trends
[ "Chemistry" ]
1,693
[ "Properties of chemical elements" ]